User share network transfer speeds slower?



Hi,

 

I have a 10Gb local network. When using user shares I only get 250-300MB/s, but if I use drive shares I get the full speed of my devices. For example, my cache is an SSD and I get a steady 600-700MB/s over the drive share. Are there any settings I can check to improve user share transfer speeds? Changing MTU/jumbo frames does nothing. Is the overhead of user shares really that large?
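For what it's worth, one way to take the network out of the equation is to write through both paths on the server itself and compare. A minimal sketch from the Unraid console, assuming a share named test that lives on the cache pool (both names are placeholders):

# Write 4GiB through the user share (FUSE/shfs) layer:
dd if=/dev/zero of=/mnt/user/test/ddtest1.bin bs=1M count=4096 conv=fsync
# Write 4GiB directly to the cache pool, bypassing FUSE:
dd if=/dev/zero of=/mnt/cache/test/ddtest2.bin bs=1M count=4096 conv=fsync
# Clean up:
rm /mnt/user/test/ddtest1.bin /mnt/cache/test/ddtest2.bin

If the /mnt/cache write is much faster than the /mnt/user write locally, the overhead is in the user share layer itself rather than the network.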

 

I have used iperf to check speeds and I get a steady 7Gbit/s. So my network isn't perfect, but that's still a lot better than 250MB/s.
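It can also be worth checking whether a single TCP stream is the limit; iperf3 can run several streams in parallel. A sketch, with 192.168.1.100 standing in for the server IP:

iperf3 -c 192.168.1.100 -t 20        # single stream
iperf3 -c 192.168.1.100 -t 20 -P 4   # four parallel streams

If the parallel run gets much closer to line rate, the bottleneck is per-stream (CPU, TCP window) rather than the link itself.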

 

Thanks

 

1 minute ago, trurl said:

I recommend not sharing drives on the network. There are a few different ways you can get yourself in trouble with that.

I only used it for testing. It was a post about slow speeds where they asked for drive share and iperf results, so I ran those beforehand. I forget who it was, but they also said they do not recommend using it.

  • 2 weeks later...
36 minutes ago, escocrx said:

Just an update: 6.8.1 has doubled my user share speeds.

So far, I have seen the same thing. User share folder/file display and access seem to be much faster in 6.8.1 than in 6.8.0.

Lately, 6.8.0 had become as slow for me as 6.7.x had been.


Good to hear. I had been putting off 6.8 because of a massive drop in 10G share performance and stability. Maybe I will try again.

Nah, moving back again. There's still a 200MB/s drop in my shares' read speed when accessing cached data (RAID10 SSDs), from 800+MB/s down to about 600MB/s.
For now, 6.7.2 is the end of the line for me.
  • 2 weeks later...
On 1/12/2020 at 3:42 AM, glennv said:


Nah, moving back again. There's still a 200MB/s drop in my shares' read speed when accessing cached data (RAID10 SSDs), from 800+MB/s down to about 600MB/s.
For now, 6.7.2 is the end of the line for me.

I have noticed the same thing as glennv with regard to user share transfer speed on 6.8.1. My write speed over my 10Gb network to my NVMe cache drive on 6.7.2 usually hovers around 500-600MB/s, but it dropped to around 300-350MB/s when I upgraded to 6.8.1.

 

I reverted to 6.7.2 and the transfer speed went back to the previously observed 500-600MB/s. There appear to be some issues in 6.8.1 with user share transfer speed, so I will stick with 6.7.2 for now as well.


Thanks for that. I was afraid I was all alone here, but my use case is not typical, so not everyone affected might spot it. There were other SMB performance issues mentioned, but my performance drop was severe. I did dozens of low-level tests, networking and so on, and they all led to the same conclusion: something is wrong.

It's just that I could never pin down the cause, because all the variables involved changed at once: the network driver (Intel 10G), the SMB layer, the FUSE driver, maybe cache memory management; some tuning parameters are gone and new ones have appeared, and so on.

What type/brand of 10G gear are you using? Maybe we can find a common denominator other than the release.


I re-upgraded to 6.8.1 tonight to see if I could figure out the cause of the speed drop, but came up empty. I turned on disk shares and tested, and noticed that the transfer speed to the cache disk share remained the same on both 6.7.2 and 6.8.1, at around 700-750MB/s. So whatever the issue is, it seems the user share transfer was the only one hit with this roughly 200+MB/s drop.

 

My Unraid server 10G network hardware:

     - 1 Mellanox ConnectX-2 PCIe x8 NIC 10G Dual Port Server Adapter 81Y1541

     - 2 Avago AFBR-703ASDZ-E2 10GbE Ethernet 10GBASE-SR SFP+ Transceivers

     - 2 5m LC-LC OM3 Multimode Fiber Optic Patch Cables

My Client PC 10G network hardware:

    - 1 Mellanox ConnectX-2 MNPA19-XTR PCIe 10G SFP+ Network Card

    - 1 10G SFP+ DAC Cable - 10GBASE-CU Passive Direct Attach Copper Twinax

Everything is connected to a MikroTik CRS309-1G-8S+ with 8 SFP+ 10Gbps ports running SwitchOS.

 

 

The above setup has been working fine for the past 2 years, and nothing was changed when the upgrade to 6.8.1 was done. I've tested without going through the MikroTik switch, and also using just the two fiber patch cables instead of the Twinax, with the same result.

 

 


As we use completely different network gear and drivers, we should be able to safely exclude those as a cause.

Your test points more to the new FUSE driver and anything related to how it is managed, since you bypass it when using drive shares instead of user shares. I saw the same.
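One way to confirm that locally, with SMB and the network out of the picture entirely, is to read the same cached file through both paths. A minimal sketch from the Unraid console, assuming bigfile.bin is a large file in a share named test that sits on the cache pool (both names are placeholders):

# Drop the page cache so the reads actually hit the SSDs:
sync; echo 3 > /proc/sys/vm/drop_caches
# Read through the user share (FUSE/shfs) path:
dd if=/mnt/user/test/bigfile.bin of=/dev/null bs=1M
# Drop caches again, then read directly from the cache pool:
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/test/bigfile.bin of=/dev/null bs=1M

If only the /mnt/user read is slow, the regression sits in the FUSE layer rather than in SMB or the NIC driver.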

 

  • 2 weeks later...

I'm not sure about the version change, but I'm installing Unraid for the first time and I also have a dual-port 10G SFP+ NIC with a DAC to my switch. I am only able to get 3Gbps, and I've tested with iperf as well, which confirmed it. Is it the version? This is crazy slow. I was just using these components in a FreeNAS machine and I could easily saturate the entire 10G link with this hardware.

1 hour ago, pish180 said:

I'm not sure about the version change, but I'm installing Unraid for the first time and I also have a dual-port 10G SFP+ NIC with a DAC to my switch. I am only able to get 3Gbps, and I've tested with iperf as well, which confirmed it. Is it the version? This is crazy slow. I was just using these components in a FreeNAS machine and I could easily saturate the entire 10G link with this hardware.
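Before blaming the release, a couple of hardware sanity checks are worth running, since a 10G card on a degraded PCIe link can cap out at a few Gbps. A minimal sketch from the console, with eth0 as a placeholder for your interface:

# Confirm the NIC actually negotiated a 10G link:
ethtool eth0 | grep -i speed
# Compare the PCIe link the card negotiated (LnkSta) against what it
# supports (LnkCap); an x8 card stuck at x1/x2 can bottleneck around 3Gbps:
lspci -vv | grep -E 'LnkCap:|LnkSta:'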

I have Asus (Aquantia AQC107) 10G NICs.

Some quick tests between my two Unraid servers (6.8.2):

 

Default MTU (1500 bytes):

# iperf3 -i0 -t20 -c 10.0.101.11
Connecting to host 10.0.101.11, port 5201
[  5] local 10.0.101.12 port 45694 connected to 10.0.101.11 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-20.00  sec  21.7 GBytes  9.34 Gbits/sec    0    314 KBytes

Jumbo frames (9198 bytes):

# iperf3 -i0 -t20 -c 10.0.101.11
Connecting to host 10.0.101.11, port 5201
[  5] local 10.0.101.12 port 45690 connected to 10.0.101.11 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-20.00  sec  22.9 GBytes  9.83 Gbits/sec    0    429 KBytes
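For anyone reproducing the jumbo frame test: the MTU has to match on every device in the path, including the switch. Unraid exposes it under Settings > Network Settings; from the console it can be set per interface, eth0 again being a placeholder (9000 is a common value; I used 9198 above):

# Raise the MTU for jumbo frames (switch and client must match):
ip link set eth0 mtu 9000
# Verify:
ip link show eth0 | grep mtu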

 

  • 1 year later...
On 1/21/2020 at 8:44 PM, ibquan said:

I re-upgraded to 6.8.1 tonight to see if I could figure out the cause of the speed drop, but came up empty. I turned on disk shares and tested, and noticed that the transfer speed to the cache disk share remained the same on both 6.7.2 and 6.8.1, at around 700-750MB/s. So whatever the issue is, it seems the user share transfer was the only one hit with this roughly 200+MB/s drop.

This is exactly my issue as well.

 

 

