10-gig networking slow


I have a 10-gig Mellanox card installed in my unraid server as well as in a Windows 10 PC. They are connected directly (no switch) with a 2 ft copper SFP+ cable. The Windows 10 PC has a Samsung NVMe SSD.
When I start a copy of a 10-30 GB file from the PC to the unraid cache drive, it runs at 700+ MB/s for several seconds, then quickly drops. Either it bottoms out at zero and just sits there, as though paused, for 10-30 seconds before starting up again, or it falls to around gigabit speed (100-ish MB/s) and finishes the transfer there (the latter is the more frequent scenario). I'd be happy with 300-400 MB/s sustained, but most of the time I average worse than a gigabit connection.
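For anyone trying to pin down the same pattern: Explorer's copy dialog smooths its numbers, so it can hide exactly when the stall happens. A way to see the per-second behavior is to write a large file in fixed chunks and log the throughput yourself. A minimal Python sketch, run on the Windows box; the Z: path is just a placeholder for however the cache share is mapped:

```python
import os
import time

# Hypothetical destination: the unraid cache share as mapped on the
# Windows PC -- the Z: drive letter is a placeholder.
DEST = r"Z:\cache_test.bin"

CHUNK = 4 * 1024 * 1024   # 4 MiB per write
TOTAL = 10 * 1024**3      # stop after 10 GiB
buf = os.urandom(CHUNK)   # incompressible data so nothing can flatter the numbers

written = 0
window_bytes = 0
window_start = time.monotonic()

with open(DEST, "wb") as f:
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
        window_bytes += CHUNK
        now = time.monotonic()
        if now - window_start >= 1.0:  # print one reading per second
            rate = window_bytes / (now - window_start) / 1024**2
            print(f"{written / 1024**2:8.0f} MiB written  {rate:7.1f} MB/s")
            window_bytes = 0
            window_start = now

os.remove(DEST)
```

A per-second log like this makes it obvious whether the drop is a single cliff (buffer filling) or a periodic sawtooth (something flushing and recovering).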

I have a second (gigabit) network card in the unraid server, and copies to and from the cache drive over that card are solid: they saturate the gigabit link at 112 MB/s every time, with no bottoming out or other performance issues.

I've tried three cache configurations in unraid. I started with four 120 GB SSDs in a cache pool, moved to a single 500 GB SSD, and now a single 960 GB (modern) SSD. The problem is identical with all of them: terrible 10-gig performance.

At first I thought it was a RAM cache filling up, but I've noticed the copy doesn't always start at 700 MB/s. Sometimes it starts out really slow, 30-40 MB/s, chugs along there for a while, jumps up to somewhere between 200 and 700 MB/s briefly, then comes plunging back down again. As I'm writing this, I'm getting 32 MB/s writes on the 10-gig link. If I cancel and start the same copy over my 1-gig connection, it's a solid 112 MB/s.
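One way to rule the disks in or out is to push bytes over the 10-gig link without touching storage at all: if a raw TCP stream also collapses, the problem is the NIC/driver/cable; if it holds near line rate, the problem is on the write path. A minimal sketch, run one side on each machine (the port is a placeholder, and a single Python stream may top out below full 10-gig line rate, but it should far exceed gigabit on a healthy link):

```python
import socket
import sys
import time

PORT = 5201          # placeholder port; any unused port works
CHUNK = 1024 * 1024  # 1 MiB per send/recv
DURATION = 30        # seconds to transmit

def server():
    # Run on the receiving machine; discards everything it receives.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.monotonic()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        elapsed = time.monotonic() - start
        print(f"received {total / 1024**2:.0f} MiB at "
              f"{total / elapsed / 1024**2:.1f} MB/s from {addr[0]}")

def client(host):
    # Run on the sending machine, pointed at the other box's 10-gig IP.
    buf = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        total, start = 0, time.monotonic()
        while time.monotonic() - start < DURATION:
            s.sendall(buf)
            total += CHUNK
        elapsed = time.monotonic() - start
        print(f"sent {total / 1024**2:.0f} MiB at "
              f"{total / elapsed / 1024**2:.1f} MB/s")

if __name__ == "__main__":
    client(sys.argv[1]) if len(sys.argv) > 1 else server()
```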

I've gone through post after post this morning and have tried numerous things: setting the MTU to 9000 on both NICs, enabling direct I/O, ensuring my SSD was trimmed, disabling all running dockers and VMs, replacing my SFP+ cable, checking performance (RAM and CPU usage are low), etc. Nothing has had the slightest impact.
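Since MTU 9000 is on that list, it's worth confirming both ends actually honor it; a jumbo-frame mismatch on a direct link can produce exactly this stall-and-recover behavior. A quick check is a don't-fragment ping at the jumbo payload size, sketched here with a placeholder peer IP:

```python
import platform
import subprocess

PEER = "10.0.0.2"  # placeholder: the 10-gig IP of the other machine

# 9000-byte MTU minus 20 bytes IP header and 8 bytes ICMP header = 8972 payload
SIZE = "8972"

if platform.system() == "Windows":
    # -f sets don't-fragment, -l sets payload size
    cmd = ["ping", "-f", "-l", SIZE, "-n", "4", PEER]
else:
    # -M do prohibits fragmentation, -s sets payload size
    cmd = ["ping", "-M", "do", "-s", SIZE, "-c", "4", PEER]

# If jumbo frames work end to end, the pings succeed; a "packet needs
# to be fragmented" / "message too long" error means one side or its
# driver is not actually running at MTU 9000.
subprocess.run(cmd)
```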

I'm running unraid version 6.5.3 on a Core i7-930 and am out of ideas.

 


What kind of read speeds are you getting from unraid? My write speeds sometimes mirror yours when writing to shares on the array; direct cache writes give me 350 MB/s sustained. I can read from a cache-only share at 1.1-1.2 GB/s.

 

Reading directly from the array I am limited to drive speed, which is anywhere from 100 to 170 MB/s.
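If anyone wants to reproduce numbers like these outside of Explorer, a plain sequential read over the share gives a comparable figure. A minimal sketch; the path is a placeholder, and the file should be larger than the server's RAM so the page cache doesn't flatter the result:

```python
import time

# Placeholder: a large existing file on the unraid share as seen from the client.
SRC = r"Z:\some_large_file.bin"
CHUNK = 4 * 1024 * 1024  # 4 MiB reads

total = 0
start = time.monotonic()
with open(SRC, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.monotonic() - start
print(f"read {total / 1024**2:.0f} MiB in {elapsed:.1f} s "
      f"= {total / elapsed / 1024**2:.1f} MB/s")
```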

 

I've noticed my 10-gigabit links slowing down randomly for no apparent reason and kind of gave up trying to find a solution. If you find one, I'd love to know what's going on.

 

My specs:

Xeon D-1541
32 GB DDR4-2400
256 GB Samsung 960 EVO cache
4× 8 TB Seagate IronWolf

 

See my thread here: Slow NVME transfers


I neglected to post my unraid specs for reference:

Core i7-930
24 GB DDR3 RAM
1× 960 GB Crucial SATA SSD
8× 3.5" SATA drives, 2-6 TB each (22 TB total)
1 parity drive (6 TB)
Intel 1-gig PCIe NIC
Mellanox 10 GbE single-port card (installed in a x16 slot)
2× Adaptec PCIe dual-port SATA cards

 

 
