crowdx42

Network Question


AFAIK, standard bonding (LACP) only helps when you have lots of clients contending for the server, since the bonding hashes on the client MAC address to pick which link to use. So in this case a single client will max out its transfer at 128GB/s (including protocol overhead). LACP is mainly for high availability and scalability with many clients, not higher throughput for a single client or a few clients.
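To illustrate the MAC hashing, here's a rough sketch of how a layer2 transmit-hash policy picks a link (the MACs are made up, and real kernel hash policies differ in detail, but the modulo-the-link-count idea is the same):

```python
# Sketch of a layer2 transmit-hash: XOR the last bytes of the source
# and destination MACs, then take the result modulo the number of
# bonded links. Every frame between one client/server pair therefore
# lands on the SAME physical link, so one client tops out at one
# link's speed no matter how many links are in the bond.

def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % n_links

server = "00:1b:21:aa:bb:01"    # made-up server MAC

# A single client always hashes to the same slave of a 4-link bond:
print(choose_link("00:1b:21:cc:dd:10", server, 4))    # -> 1, every time

# Many clients spread their flows across all four links:
used = {choose_link(f"00:1b:21:cc:dd:{i:02x}", server, 4) for i in range(16)}
print(sorted(used))                                    # -> [0, 1, 2, 3]
```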

 

Cisco and probably the other big names have a proprietary bonding, "Ethernet trunking" (EtherChannel?), which from the switch's hardware point of view aggregates the interfaces and treats them as a single link. AFAIK this only works between switches from the same vendor.

 

10 hours ago, ken-ji said:

AFAIK, standard bonding (LACP) only helps if you have lots of clients contending to access the server, since the bonding does some hashing with the client MAC address to pick which link to use. So in this case a single client will max out its transfer at 128GB/s (including protocol overhead). LACP is mainly for high availability and scalability with many clients - not high capacity for single/few clients

 

CISCO and probably the big names have a proprietary bonding called ethernet trunking (?) which from the HW point of view (of the switch) it aggregates the interfaces together and treats them as a single link. AFAIK, this only works between same vendor switches.

 

Well, my number 1 goal in the update is to reduce or eliminate buffering on the main server when performing data dumps to it while trying to stream video at the same time. If I achieve that, then I will be happy :)

10 hours ago, ken-ji said:

AFAIK, standard bonding (LACP) only helps if you have lots of clients contending to access the server, since the bonding does some hashing with the client MAC address to pick which link to use. So in this case a single client will max out its transfer at 128GB/s (including protocol overhead). LACP is mainly for high availability and scalability with many clients - not high capacity for single/few clients

128 gigabytes per second? Sign me up! I'd even settle for 125GB/s. :x

Other than the unit used, my unRAID's 4-port LACP bond agrees with your statements.
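For anyone following along, the back-of-envelope numbers behind the unit joke (the 94% payload efficiency figure is an assumed ballpark for TCP over 1500-byte-MTU Ethernet):

```python
# What a single gigabit link can actually move, in megabytes.
line_rate_bps = 1_000_000_000             # 1 Gb/s
raw_MBps = line_rate_bps / 8 / 1_000_000
print(raw_MBps)                           # -> 125.0 MB/s -- MB, not GB

# Ethernet/IP/TCP headers eat a few percent; ~94% payload efficiency
# is an assumed ballpark for a standard 1500-byte MTU.
print(round(raw_MBps * 0.94, 1))          # -> ~117.5 MB/s of real payload
```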


Did you guys ever take a look at the video linked below? From what it describes, a lot of the issue is down to settings in Windows.

 

19 hours ago, crowdx42 said:

Did you guys ever take a look at the video linked below? From what it is describing, a lot of the issue is the settings in Windows.

As far as I know, SMB3 multi-channel support in Samba is still only experimental, so it probably won't be an option for some time...
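For reference, turning the experimental support on looks like this in smb.conf (the option name is from Samba's documentation; whether unRAID exposes it is another matter):

```
# smb.conf fragment -- experimental at the time of writing, with a
# documented risk of data corruption; testing only.
[global]
    server multi channel support = yes
```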

7 minutes ago, gubbgnutten said:

As far as I know SMB3 multi channel support in Samba is still only experimental, so probably won't be an option for some time...

 

Yes, it should be used for testing only, as there's a chance of data corruption; I also never got it to work with unRAID.


So I just ran into a snag with the NIC card: I am using all my PCIe x16 slots, and all that I have left on the board are PCIe x1 slots, which is a different socket. Are there any 4-port gigabit NICs that work with unRAID and are straight PCIe x1? I see a Syba card on Amazon, but I have no idea if that would work. Thoughts?
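A quick sanity check on the slot bandwidth (these are nominal per-direction rates; real throughput is a bit lower after encoding and protocol overhead):

```python
# How many saturated gigabit ports can one PCIe x1 slot feed (one direction)?
gige_port_MBps = 125                    # one gigabit port, one direction
slot_MBps = {"PCIe 1.x x1": 250,        # nominal per-direction slot rates
             "PCIe 2.0 x1": 500}
for gen, bw in slot_MBps.items():
    print(f"{gen}: ~{bw // gige_port_MBps} ports")
# -> PCIe 1.x x1: ~2 ports
# -> PCIe 2.0 x1: ~4 ports
```

So whether a 4-port card makes sense in that slot depends heavily on the slot's PCIe generation.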

I could also just pick up this board and swap it with the existing board.

https://www.newegg.com/Product/Product.aspx?Item=N82E16813157369R

Also, one other comment: the Cisco switch arrived and I have plugged it in next to my main unRAID server. My initial thought was to plug the 4-port NIC directly into it and then run a single Cat6a cable back to my router. My question: with that config, would VLANs still work to keep the IP camera traffic on its own VLAN (the IP cameras plus the PC that runs as a DVR), and from there I could port-forward to the DVR PC via my router? Would this work and separate the heavy traffic from the IP cams, or would everything have to be plugged directly into the Cisco switch, with the router then used for port forwarding etc.?
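If the switch and router both speak 802.1Q, the camera VLAN could look something like this hypothetical IOS-style sketch (port numbers, VLAN ID, and exact syntax are assumptions and vary by model; the router must also be VLAN-aware for the port forward to reach the DVR PC):

```
! Hypothetical: put the cameras and the DVR PC on VLAN 20.
vlan 20
 name CAMERAS
interface range GigabitEthernet1/0/5-8
 switchport mode access
 switchport access vlan 20
! Uplink to the router carries both VLANs tagged (802.1Q trunk).
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1,20
```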


I think you want your new Cisco switch to be your core switch, so plug everything into it. As far as VLANs go, they are more for security and isolating traffic; I don't think they're necessary on your home network, and that switch is certainly capable of handling any traffic your cameras or servers can throw at it.


So with the Cisco as the core switch, should I turn off DHCP on my router, or still leave the router doing DHCP?


I'm not sure if anyone noticed, but the picture you uploaded of your Asus router shows you using 4-6MB/s. If your camera traffic really is only that much, and it travels over the router, then it's hardly any traffic at all. I may be missing something, but it sounds like a different issue. I have a few cameras on my network with their settings turned down quite a bit, so they use almost no bandwidth; if your bit rate and fps are low enough, you can mitigate the camera bandwidth.

I would take a top-down view of your network segments, specifically the wiring from file server to backup server, server to media player, and cameras to the DVR computer. If any of these run over the same cable, or over the same switch fed by the same cable, you could run into issues.

I just finished transferring about 80TB of data from my unRAID server through my network to my backup computer at the other end, all while streaming videos on my and my wife's computers. I have a single cable going to my server room switch, and all that traffic from all of my clients travels down that one wire, so I would think I should have experienced something similar.

I'm wondering if it was not a network traffic issue but a potential disk I/O problem. Perhaps the media you were streaming was on one of the drives being hampered by the file transfer.

Curious to find a solution. I'm late to this party if you're already ordering new network cards, but I felt obligated to mention some of the basic things I saw. Then again, it is past my bedtime, so maybe I'm seeing things that aren't there.




Alex.Vision, I agree with your comments, but the only place in my setup where streaming shares bandwidth with my IP cams is the living room TV (which is where we get the buffering) and 3 rear IP cameras. I do have extra Cat6 cable, so I could do a separate run to split them, but I have not done that yet; I would think the bandwidth used by 3 IP cams plus a 1080p stream should be easily handled by a single 75ft Cat6 run. When I install the Cisco switch properly, I will be pulling out several of the Netgear GS108 switches; I have 2 right at the router to expand its ports, so those will go away.
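A rough load estimate for that shared 75ft run backs this up (the per-device bitrates are assumed, not measured; plug in your actual camera settings):

```python
# Do 3 IP cams plus one 1080p stream threaten a single gigabit run?
cam_Mbps = 8        # assumed bitrate for a 1080p IP camera
stream_Mbps = 20    # assumed high-bitrate 1080p video stream
link_Mbps = 1000    # one Cat6 gigabit run

total = 3 * cam_Mbps + stream_Mbps
print(f"{total} Mb/s of {link_Mbps} Mb/s = {total / link_Mbps:.1%} utilized")
# -> 44 Mb/s of 1000 Mb/s = 4.4% utilized
```

So raw bandwidth on that cable shouldn't be the problem; the trouble, if any, is more likely upstream of it.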

 


So, to resurrect this thread with another question: I installed a 4-port Intel NIC, and I am wondering how unRAID will deal with it if I plug in 4 RJ45 cables. I currently have unRAID set up with a static IP. If I set additional static IPs, would I be able to connect through the other ports on the NIC to offload server-to-server backups? How are other folks using the extra ports on multi-port NICs?

Thanks for any insight

Patrick


I think Alex.vision has identified what the issue could be. You are mixing 100Mb and 1000Mb traffic on the gigabit links between switches; flow control, not bandwidth, is most likely your issue. This link does a fair job of explaining it. As others have already said, isolating the 100Mb traffic from the 1Gb traffic, especially on the gigabit links between the GS108s, should be your goal. A quick test is to unplug all 100Mb devices from your network, do your simultaneous file transfer and streaming, and see if you still get buffering. Also limit the number of daisy-chained switches; the fewer hops the better. The suggested Cisco switch is a step in the right direction, and I suggest you run a separate cable from the TP-Link 100Mb switches back to it rather than going through the GS108s.
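One concrete thing to check while testing is whether pause-frame flow control is active on the server's NIC (the interface name eth0 is an assumption; on unRAID the bond or bridge name may differ):

```
# Show pause-frame (flow control) settings on the NIC:
ethtool -a eth0

# Temporarily disable pause frames to see if the buffering changes:
ethtool -A eth0 rx off tx off
```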

 

Another thing to consider is the server itself and how it handles high disk I/O. You may have tons of network bandwidth, but if the server cannot read/write the load without bottlenecking, nothing you do outside the server will matter. View the stats via the Stats plugin to see what is going on during your simultaneous file transfers and streaming to your NUC; watch CPU load vs. disk vs. network.

 

To answer your question about the 4-port NIC in a PCIe x1 slot: you will have enough bandwidth for roughly two of the four ports. As far as how you utilize the four ports, I don't think using all four will benefit you in any way. Perhaps two in a balance-rr config on unRAID, going to the Cisco with a two-port trunk/LAG (if the Cisco is capable), which would net a 2Gb link from the Cisco to the server; but no single device will realize that bandwidth unless it is running Linux or Windows Server with the same bonding config enabled.

