Directly Connect Two unRAID Servers Together



So I want to connect two of my unRAID servers together. I don't have a managed switch, so I can't take advantage of bonding. With 2 NICs on each server, I want one port on each directly connected between the two; the other port on each server will be connected to the router.

Is this possible? If so, how would I set this up?

Link to comment

So I want to connect two of my unRAID servers together ... Is this possible? If so, how would I set this up?

 

Well, the first thing you would need is a special Cat5E cable with the leads cross-connected so that the data-out pair(s) of one computer become the data-in pair(s) on the other. (This was sometimes done for a two-computer network back in ancient times, when hubs were very expensive.) You also need to assign static IP addresses to these two ports.
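A minimal sketch of that static assignment from the command line, assuming eth1 is the directly connected port on each machine and using example addresses (unRAID would normally do this through its network settings page rather than by hand):

    # On server A: static address on the direct-link port, no gateway needed
    ip addr add 192.168.2.1/24 dev eth1
    ip link set eth1 up

    # On server B: an address in the same subnet
    ip addr add 192.168.2.2/24 dev eth1
    ip link set eth1 up

With both ends configured, each server reaches the other at its 192.168.2.x address over the dedicated cable.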

 

I also have the same question as trurl...

Link to comment

Well, the first thing you would need is a special Cat5E cable with the leads cross-connected so that the data-out pair(s) of one computer become the data-in pair(s) on the other.

As long as both network cards follow the gigabit spec, crossover cables aren't necessary. Auto-MDI-X is part of the gigabit spec, so the cards will figure out automatically which pairs to use.

 

The real question is what the end goal is, because writing to a parity-protected array with current common hardware can't even saturate a 1Gb connection.

Link to comment

Should have known everyone would be curious.

 

Reason for wanting this is:

My current server has all my files/media on it. I've built a new server that I want to offload all the work of the plugins/dockers to, but all my data still resides on the current (soon-to-be archive) server.

 

I want to use the now-archive server (it won't be running any dockers) as a dumb box to serve media to the new production server that will be doing the heavy lifting (Plex/Emby etc). So with all the traffic happening on the production server (serving files from it locally, and serving files mounted via NFS from the archive server, to multiple clients), I feel it could be possible to saturate a gigabit connection during reads from multiple clients.

 

Have I explained that well enough?

 

I'm open to having my logic questioned and being told I'm crazy.

Link to comment

... Reason for wanting this is: ... I feel it could be possible to saturate a gigabit connection during reads from multiple clients. ... I'm open to having my logic questioned and being told I'm crazy.

 

A couple thoughts ...

 

First, electronically you can certainly do what you've asked -- just run a cable from the 2nd NIC on the archive server to the new production server. What I do NOT know is whether or not UnRAID supports this configuration ... but my guess is it does NOT. I'm by no means an expert on bonding -- but if I'm reading this article [ http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3850636/Understanding-NIC-Bonding-with-Linux.htm ] correctly, you would need 802.3ad switch support to configure one NIC to handle a specific IP (the other server) and the other NIC all other traffic.

 

Second, it's not at all clear that there's any need to do what you've suggested. I presume you built a new server because you need more "horsepower" (CPU power) and perhaps slots to handle the computational load you'd like your server to carry [plugins, Dockers, VMs, etc., with perhaps some transcoded output streams]. I would simply move your disks to this server as well -- the overhead of managing the array adds very little to the computational load, and accessing the array locally within the server will be FAR faster than any access over a Gb network -- even if it were a dedicated network between the two servers. I'd use your original server as a backup server -- and simply access it via your existing network during off hours for updating your backups.

 

 

 

Link to comment

The thing about 1000BaseT Ethernet is that it uses all four pairs in the cable simultaneously and bidirectionally, unlike the older 100BaseTX and 10BaseT standards, which used only the green pair in one direction and the orange pair in the other. Old-style crossover cables simply crossed over those two pairs. With 1000BaseT, crossover cables are neither needed nor appropriate.

Link to comment

Should have known everyone would be curious. ... I want to use the now-archive server ... as a dumb box to serve media to the new production server ... I feel it could be possible to saturate a gigabit connection during reads from multiple clients.

 

Why not set up a VM on the new server, pass the second NIC port on the MB through to that VM, and set up a Plex server on the VM? You could still have a Plex docker running which would be using the first NIC port. Thus, you would have two Plex servers in the same box, each with an assigned NIC.

Link to comment

Thanks for looking into this and for the link to that article, Gary. What I took away from it is that I would want to use balance-alb, which doesn't require the use of a switch.

 

Of course a managed switch would be the obvious solution. But another scenario could be the use of 10GbE cards. This could also apply to wanting a fast link between two servers without the need to buy a 10GbE switch. I bring this up as I also have a few 10GbE cards laying around, and the idea of a fast link between two servers without the need for a switch may appeal to others.

 

And yes Gary, moving my drives to the new server is a logical approach. I'm not concerned about the overhead of managing the array hurting performance on the server. But for various reasons, some better than others, I want to keep the setup as is. Space is one reason.

 

If this isn't possible, or is more difficult than it's worth, I'll just drop the idea. But like you said, it's electronically possible; I'm just not sure if unRAID would support this type of configuration. And from my limited reading thus far, that configuration would be balance-alb.
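For reference, a rough sketch of what balance-alb amounts to at the raw Linux level, with eth0/eth1 and the address as assumptions (unRAID exposes the bonding mode in its network settings, so this is illustration rather than a recommended procedure):

    # Create a bond in balance-alb (mode 6); no switch support is required
    ip link add bond0 type bond mode balance-alb miimon 100
    ip link set eth0 down
    ip link set eth0 master bond0
    ip link set eth1 down
    ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0

Note that balance-alb spreads traffic across both ports; it is not the same as dedicating one port to a single peer.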

Link to comment
  • 3 years later...

Has anyone made any headway on this topic? I am in this exact scenario. I recently sold one of my servers: I was running my array in an external JBOD Supermicro chassis, and my Cisco server was doing the lifting. I removed the Cisco, moved a low-power system into the Supermicro chassis, and now have a 10Gb connection direct to my main PC (desktop) in my office, which now runs a second unRAID OS to do the lifting. Noise, power, and newer hardware are the 3 reasons I'm doing this.

I want to keep my 26 disks in the Supermicro chassis and utilize the 10Gb link I have between the 2 systems. I have managed to get this to work with no issue over SMB and NFS on a 1Gb connection, host to host via routing; however, I cannot get a direct link working via 10Gb over an optical cable. My limitation, I believe, is in not having a gateway to tell the 2 hosts where to look on the separate subnet. The funny part is this all works like a dream in Windows 10: I set a second 10Gb NIC port to static 192.168.2.3 and the unRAID host's 10Gb NIC to 192.168.2.2, and they work with no issue in the Windows file system. I have a 10Gb managed switch, but it's loud and uses a crapload of power vs. an active SFP+ cable, which is the preferred option. If anyone has any thoughts or ideas, please let me know. I am OK at Debian but have no clue where to start with the Slackware command line.

 

Cheers

Link to comment

Slackware is Linux. Debian is Linux. Generally, what works on one command line will work on the other. You may find that a few utilities (and commands) are missing from the Unraid version of Slackware because they aren't necessary for the normal functioning of Unraid. There are also a few Unraid-only commands added to its distribution, like the ones to reboot and power down the system, since the array has to be stopped first.

Link to comment

The servers have 2 separate networks: the 1Gb side is connected to a managed switch and router; the 10Gb side is not. These are direct links; think of it as crossover. Over the 1Gb link the servers see and can ping each other without issue. The 10Gb runs direct from port 1 of the 10Gb NIC in the middle server to the 10Gb NIC on the first server; port 2 on the middle server is connected direct to the 10Gb port on the Windows machine. Windows to middle server works without issue; however, middle server to first server does not. This is where I need a solution. The 1Gb network is 192.168.1.0 255.255.255.0.

The 10Gb addresses are 192.168.2.2, 192.168.2.3, 192.168.2.4, and 192.168.2.5, with no gateway and no subnet mask; it's a direct link.
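A point-to-point link like this needs no gateway, but each address does need a subnet mask so the kernel knows the peer is directly reachable. A minimal sketch, with interface names assumed:

    # unraid1's 10Gb port (assumed eth1)
    ip addr add 192.168.2.2/24 dev eth1
    # unraid2's 10Gb port 1 (assumed eth2), the link to unraid1
    ip addr add 192.168.2.3/24 dev eth2

The /24 (i.e. 255.255.255.0) is the piece that tells each host 192.168.2.x is on the local wire; without any mask, neither end knows where to send the traffic.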

Link to comment
1 hour ago, Fuoman said:

The servers have 2 separate networks ... Windows to middle server works without issue; however, middle server to first server does not. ... no gateway, no subnet mask, direct link.

 

I am not a networking guru, but I think you need to provide a couple more items: first, the diagnostics file (Tools >>> Diagnostics), and second, a drawing of your network which shows pictorially what you are attempting to describe above. Be sure that you show all connections, IP addresses, link speeds, servers, and switches.

Link to comment

Hello Frank1940, this is not a diagnostics issue; it is not an issue with the servers or a misconfiguration. It is the lack of a configuration that I need to figure out.

 

My network is set up like this, starting from the first server and the router.

 

Cisco 8-port router: port 4 goes to the onboard NIC on the first server, unraid1, IP address 192.168.1.81; this is the default port for webGUI, docker access, and VM access using the internal routing in unRAID. All my apps and things are routed on 172.x.x.x IPs. My second server, unraid2, connects to port 6 on my router with IP 192.168.1.112; this is the new server. The Windows PC is 192.168.1.45 on port 2 of the router. This is all irrelevant, as all 3 systems see each other without a problem; however, 1Gb is slow.

 

unraid1 (array, 1Gb ports):
    default: 192.168.1.81
    docker:  192.168.1.82
    docker:  192.168.1.83
    vm:      192.168.1.84
    10Gb port: 192.168.2.2  -------->  unraid2's 10Gb port 1

unraid2 (main workstation):
    default: 192.168.1.112
    vm:      192.168.1.134
    vm:      192.168.1.132
    10Gb port 1: 192.168.2.3  (link to unraid1)
    10Gb port 2: 192.168.2.4  ----->  Windows PC's 10Gb port

Windows workstation:
    1Gb: 192.168.1.45
    10Gb port: 192.168.2.5  (link to unraid2)

 

Everything with 192.168.1.x is part of my home network; all IPs are assigned static.

Everything with 192.168.2.x is not part of any router/switch; these are straight-through connections to each other. unraid1 has a single Mellanox ConnectX-3 10Gb NIC with a 25m cable to port 1 of a dual-port Intel X540 10Gb NIC on unraid2; port 2 goes to the Windows PC's Mellanox ConnectX-2 10Gb port.

Windows recognizes unraid2 on 10Gb, and I can map shares to unraid2 only (by design). I cannot get unraid2 to mount a share from unraid1 over 10Gb, only 1Gb. I would like to use Samba or NFS to mount a media share on unraid1 to unraid2 for my dockers to access over 10Gb. What script, commands, routing, or network wizardry can I do to make NFS or Samba see the shares using these 10Gb IP addresses?
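Assuming the 10Gb addressing is in place and the two ports can ping each other, the mount itself is ordinary NFS or SMB syntax pointed at the 10Gb IP instead of the hostname; the share path and mount point below are made-up examples:

    # Verify the direct link first
    ping -c 3 192.168.2.2

    # NFS mount of a share on unraid1 over the 10Gb address
    mount -t nfs 192.168.2.2:/mnt/user/media /mnt/remotes/unraid1_media

    # Or SMB, if the share is exported that way
    mount -t cifs //192.168.2.2/media /mnt/remotes/unraid1_media -o guest

No extra routing is required for NFS or Samba to "see" the 10Gb path: mounting by that IP uses that link.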

Link to comment
3 hours ago, Fuoman said:

Windows to middle server works without issue.

And according to the picture attached, unraid2 has two ports with 192.168.2.x IPs in the same subnet; that will be a problem. I assume they are a dual-port NIC, but that doesn't mean they are interconnected. (As far as I know, a dual-port NIC can be connected directly, device to device, to eliminate an extra switch, but this needs some configuration to work.)

 

You should try setting the unraid1-to-unraid2 link on a 3rd subnet.
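A minimal sketch of that suggestion, with interface names assumed; the point is that each direct link gets its own subnet, so unraid2 can tell which port reaches which peer:

    # unraid1's 10Gb port moves to a new subnet
    ip addr add 192.168.3.2/24 dev eth1
    # unraid2, port 1 (link to unraid1), same new subnet
    ip addr add 192.168.3.3/24 dev eth2
    # unraid2, port 2 (link to the Windows PC), stays on 192.168.2.x
    ip addr add 192.168.2.3/24 dev eth3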

Link to comment

I have done this just now; I had to reboot my server and am waiting to try the connection. I've tried to confirm all settings are set correctly for SMB and NFS sharing as well. I will try now with 192.168.3.2 and 192.168.3.3 for unraid1 and unraid2, then 192.168.2.3 and 192.168.2.4 for unraid2 and Windows.

Link to comment
4 minutes ago, Fuoman said:

I have done this just now ... I will try now with 192.168.3.2 and 192.168.3.3 for unraid1 and unraid2, then 192.168.2.3 and 192.168.2.4 for unraid2 and Windows.

Next, you could try bridging the 2 10Gb ports in unraid2 on 192.168.2.x.
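A rough command-line sketch of that bridge idea, with interface names again assumed (unRAID can also build the bridge from its network settings page):

    # Create a bridge and enslave both 10Gb ports
    ip link add name br10g type bridge
    ip link set eth2 master br10g
    ip link set eth3 master br10g
    ip link set br10g up
    # The bridge carries a single address; both peers then share 192.168.2.x
    ip addr add 192.168.2.3/24 dev br10g

With the ports bridged, unraid1 and the Windows PC sit on the same segment, so one 192.168.2.x subnet works for all three machines.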

Link to comment
