
10gig Networking slow


I have a 10-gig Mellanox card installed in my Unraid server as well as in a Windows 10 PC.  They are connected directly (no switch) with a 2 ft copper SFP+ cable.  The Windows 10 PC has a Samsung NVMe SSD.
When I start a copy of a 10–30 GB file from the PC to the Unraid cache drive, it starts out at 700+ MB/s for several seconds and then quickly drops.  Either it bottoms out at zero, where it just sits as though paused for 10–30 seconds before starting up again, or it drops to around gigabit speeds (~100 MB/s) and continues the transfer at that rate (the latter is the more frequent scenario).  I'd be happy with 300–400 MB/s sustained, but most of the time I average worse than a gigabit connection.

I also have a second (gigabit) network card in the Unraid server, and copies to and from the cache drive over this card are solid, easily saturating my gigabit link (112 MB/s) every time, with no bottoming out or other performance issues.
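For context, back-of-envelope arithmetic shows those numbers line up with link speed; a quick sketch, using decimal units and ignoring protocol overhead (so real-world SMB tops out a bit lower, e.g. ~112 MB/s on gigabit):

```shell
# Theoretical line-rate ceilings in decimal MB/s (bits -> bytes -> MB).
awk 'BEGIN { printf "1GbE ceiling:  %d MB/s\n",  1e9 / 8 / 1e6 }'
awk 'BEGIN { printf "10GbE ceiling: %d MB/s\n", 10e9 / 8 / 1e6 }'
```

So a 10GbE link has roughly ten times the headroom; sustained writes at 30–100 MB/s are nowhere near it.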

I've tried three cache configurations on Unraid: I started with four 120 GB SSDs in a cache pool, moved to a single 500 GB SSD, and am now on a single 960 GB (modern) SSD.  The problem is identical with all of them: terrible 10-gig performance.

At first I thought it was a RAM cache filling up, but I've noticed the copy doesn't always start at 700 MB/s.  Sometimes it starts out really slow, 30–40 MB/s, chugs along there for a while, maybe jumps briefly to somewhere between 200 and 700 MB/s, then comes plunging back down.  As I'm writing this, I'm getting 32 MB/s writes on the 10-gig link.  If I cancel it and start the same copy over my 1-gig connection, it's a solid 112 MB/s.
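One way to rule the SSD in or out is to write to the cache locally on the server, taking the network out of the loop entirely. A minimal sketch (the mount point is an assumption; `oflag=direct` bypasses the RAM cache so you see the drive's real write speed):

```shell
# Write 4 GiB straight to the cache mount, bypassing the page cache
# (path /mnt/cache is an assumption -- adjust for your system).
dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct status=progress

# Clean up the test file afterwards.
rm /mnt/cache/ddtest
```

If this local write is fast while the 10GbE copy is slow, the drive itself is fine and the problem is in the network/SMB path.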

I've gone through post after post this morning and tried numerous things: setting the MTU to 9000 on both NICs, enabling direct I/O, ensuring my SSD was trimmed, disabling all running Docker containers and VMs, replacing my SFP+ cable, and checking performance (RAM and CPU usage are low).  Nothing has had the slightest impact.
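To separate the network from the storage stack, a raw-throughput test with iperf3 is worth running, along with a check that the 9000 MTU actually took effect end to end. A sketch of the Linux-side commands (the IP address and interface name are placeholders):

```shell
# On the Unraid box: start an iperf3 server.
iperf3 -s

# From the other end (iperf3 has Windows builds too): push data for
# 30 seconds, then repeat with -R to test the reverse direction.
iperf3 -c 10.0.0.2 -t 30
iperf3 -c 10.0.0.2 -t 30 -R

# Confirm the jumbo-frame MTU is really set (interface name assumed):
ip link show eth1 | grep -o 'mtu [0-9]*'

# 8972 data bytes + 28 bytes of ICMP/IP headers = a 9000-byte frame;
# with -M do it must pass without fragmentation or jumbo frames are broken.
ping -M do -s 8972 -c 3 10.0.0.2
```

If iperf3 runs near line rate while file copies crawl, the link is fine and the bottleneck is in the SMB/disk path; if iperf3 is also slow, it's the NICs, cable, or driver.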

I'm running Unraid 6.5.3 on a Core i7-930 and am out of ideas.


Edited by shutterbug


What kind of read speeds are you getting from Unraid? My write speeds sometimes mirror yours when writing to shares on the array; direct writes to the cache give me 350 MB/s sustained. I can read from cache alone at 1.1–1.2 GB/s.


Reading directly from the array, I am limited to drive speed, which is anywhere from 100 to 170 MB/s.
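For anyone wanting to confirm that reads really are drive-limited rather than network-limited, a quick local read benchmark is one way to check (device name and file path below are assumptions, adjust for your system):

```shell
# Raw sequential read speed of a single array disk (pick your device
# from /dev/disk/by-id or the Unraid UI).
hdparm -t /dev/sdb

# Read an existing large file from the cache into the void, bypassing
# the network; compare this against what you see over the 10GbE link.
dd if=/mnt/cache/bigfile of=/dev/null bs=1M status=progress
```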


I noticed the 10 Gbit links slowing down randomly for no apparent reason and kind of gave up trying to find a solution. If you find one, I'd love to know what's going on.


My specs -

Xeon D-1541

32 GB DDR4-2400

256 GB 960 EVO cache

4× 8 TB Seagate IronWolf


See my thread here: Slow NVME transfers


Read speeds from the cache drive over 10-gig are where I'd expect them to be, i.e. 325–350 MB/s (the max read speed of the SSD on its current interface).  It is writes where it falls apart.



I neglected to post my Unraid specs for reference:

Core i7-930

24 GB DDR3 RAM

1× 960 GB Crucial SATA SSD

8× 3.5" SATA drives, ranging from 2–6 TB each, 22 TB total

1 parity drive (6 TB)

Intel 1-gig PCIe NIC

Mellanox 10 GbE single-port card (installed in a 16x slot)

2× Adaptec PCIe dual-port SATA cards


