tyrindor

Why are my disk to disk SMB transfers slow?

8 posts in this topic


I am currently running without a parity drive (temporarily!), as I do a lot of disk-to-disk transfers and restructuring/organization. I am on 10G with jumbo frames set to 9000 on both the Windows 10 PC and unRAID. DirectIO is enabled and I am forcing SMB2_02 because in the past everything else has been very slow. All disks are the same speed: 8TB Seagate archive drives.
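For anyone who wants to replicate the SMB2_02 forcing, it's done with standard smb.conf options. A sketch of what I mean (on unRAID these would go into the Samba extra configuration box under Settings → SMB; the option names are standard Samba settings):

```ini
# Sketch: pin Samba to the SMB2_02 dialect only.
# "server min protocol" and "server max protocol" are standard
# smb.conf options; SMB2_02 is one of their accepted values.
[global]
server min protocol = SMB2_02
server max protocol = SMB2_02
```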

 

If I do a "move cache" operation I see 160MB/s from cache to disk, but if I do a disk-to-disk transfer I see about 50MB/s. These are all large 30-50GB files. The drives are much faster than this and benchmark at about 160-200MB/s for both reads and writes. I am barely getting faster speeds than I did with parity enabled...

 

Why is SMB slowing it down so much?

Edited by tyrindor


Can we assume that the data is flowing through your PC during the copy/move process? I also seem to recall that jumbo frames can be counter-productive in many cases.


Jumbo frames are pretty normal on 10Gb/s Ethernet. The problems arise when people start using them on gigabit Ethernet, because they are not backwards compatible and everything on the network segment in question needs to have them enabled. In other words, they are all or nothing. Because 10G switches are still so expensive, most unRAID 10G users have a dedicated point-to-point link between their server and their main workstation, with everything else on standard gigabit. In that instance jumbo frames are ideal.
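If you want to verify that a segment really is jumbo-clean end to end, a quick sketch (the interface name `eth1` and the target IP are placeholders; substitute your own):

```shell
# Show the current MTU on the 10G interface (eth1 is a placeholder)
ip link show eth1

# Enable jumbo frames; this must be done on EVERY device on the segment
ip link set dev eth1 mtu 9000

# Verify a full-size jumbo frame passes without fragmentation.
# Payload size 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
ping -M do -s 8972 -c 3 192.168.1.10
```

If the ping fails with "message too long" anywhere along the path, something on that segment is still at MTU 1500.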

2 hours ago, John_M said:

Jumbo frames are pretty normal on 10Gb/s Ethernet. The problems arise when people start using them on gigabit Ethernet, because they are not backwards compatible and everything on the network segment in question needs to have them enabled. In other words, they are all or nothing. Because 10G switches are still so expensive, most unRAID 10G users have a dedicated point-to-point link between their server and their main workstation, with everything else on standard gigabit. In that instance jumbo frames are ideal.

 

Do you think I should disable jumbo frames on my 1G connection then? I am using 1G to a switch/router, and 10G is a direct link between two Mellanox ConnectX-2 controllers. I was actually going to try disabling jumbo frames everywhere to see if that's the issue, but I haven't got around to it. I can transfer to my SSD cache at 600MB/s, though, so I doubt it's the 10G network.
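One way I could rule the link in or out before touching jumbo settings would be a raw TCP test with iperf3. A sketch (assumes iperf3 is installed on both ends; 10.0.0.2 is a placeholder for the server's 10G address):

```shell
# On the unRAID server: start an iperf3 listener
iperf3 -s

# On the Windows PC: test raw TCP throughput over the 10G link
# for 30 seconds (10.0.0.2 is a placeholder address)
iperf3 -c 10.0.0.2 -t 30
```

A healthy 10G point-to-point link should report somewhere near 9.4 Gbit/s. If it does, the bottleneck is in the disks or SMB, not the network.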

 

For whatever reason, my server seems to be bad at multitasking despite having a quad-core E3 v2 Xeon. I blame the fact that I am using archive drives; they seem to bring the entire system to a crawl whenever they are being written to. This doesn't happen with my new 12TB drives.

Edited by tyrindor


Since no one else has come back with any thoughts, I would suggest that you run the experiment you proposed. Your results would be most interesting and might give some insight into how various network parameter settings affect transfer speeds on 10Gb hardware.

13 hours ago, Frank1940 said:

Since no one else has come back with any thoughts, I would suggest that you run the experiment you proposed. Your results would be most interesting and might give some insight into how various network parameter settings affect transfer speeds on 10Gb hardware.

 

I have three 12TB drives preclearing right now, so it's going to be a while before I can look into this further.

 

I really don't understand what I am seeing right now, though. I am now getting piss-poor 10-20MB/s reads from the majority of my drives as I try to transfer data off them. These drives are connected to three different SAS2LP controllers on a Supermicro X9SCM motherboard with an E3 v2 Xeon processor. I want to blame the fact that they are "archive" drives, but archive drives shouldn't have any issues with reads, and I never had these issues in the two years I've owned them.

 

RAM usage is 13% and CPU usage is <10%. Writes are fine (160MB/s) and my three preclears are running at 270MB/s each, so I don't think it's an interface/SAS controller issue. I can't test parity speeds until my preclears are done, but the last parity check, about 16 months ago, finished with an average of ~180MB/s, so I doubt anything has changed.

 

These slow reads also seem to happen with a program like Syncthing, which to my knowledge wouldn't be using SMB at all. I'm puzzled; the problem seems to come and go and only affects certain things.
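Next time it happens I might try measuring raw read speed on one of the slow disks locally, to take SMB and the network out of the picture entirely. A sketch (the device name `/dev/sdb` and the file path are placeholders; check the actual device assignments on the Main page first):

```shell
# Raw sequential read benchmark of one disk, bypassing SMB entirely
# (/dev/sdb is a placeholder for the disk under test)
hdparm -t /dev/sdb

# Or read a large existing file through the filesystem,
# dropping the page cache first so it's a real disk read
# (the file path is a placeholder)
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/disk1/somefile of=/dev/null bs=1M status=progress
```

If these show 160-200MB/s while Syncthing and SMB are seeing 10-20MB/s, the drives are fine and the problem is higher up the stack.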

Edited by tyrindor

On 03/04/2018 at 10:22 PM, tyrindor said:

Do you think I should disable jumbo frames on my 1G connection then?

 

Yes, unless absolutely everything on your gigabit LAN both supports them and has them enabled.

 


Regarding your other questions: I can't say without seeing your diagnostics.

 

