slimshizn Posted October 19, 2018 (edited)

Current setup: I have two servers, Main and Backup.

Main:
Model: Custom
M/B: Supermicro - X9DRi-LN4+/X9DR3-LN4+
CPU: Intel® Xeon® CPU E5-2660 v2 @ 2.20GHz
HVM: Enabled
IOMMU: Enabled
Cache: 640 kB, 2560 kB, 25600 kB
Memory: 64 GB Multi-bit ECC (max. installable capacity 1536 GB)
Network: bond0: fault-tolerance (active-backup), mtu 9000
eth0: 10000 Mb/s, full duplex, mtu 9000
eth1-eth4: not connected
Kernel: Linux 4.18.14-unRAID x86_64
OpenSSL: 1.1.0i

Backup:
Model: Custom
M/B: GIGABYTE - GA-7TESM
CPU: Intel® Xeon® CPU X5687 @ 3.60GHz
HVM: Enabled
IOMMU: Enabled
Cache: 32 kB, 1024 kB, 12288 kB
Memory: 64 GB Single-bit ECC (max. installable capacity 192 GB)
Network: bond0: fault-tolerance (active-backup), mtu 9000
eth0: 10000 Mb/s, full duplex, mtu 9000
eth1-eth2: not connected
Kernel: Linux 4.18.14-unRAID x86_64
OpenSSL: 1.1.0i

Both servers go through a 10GbE switch (the 16-XG) and everything connects and works as normal. Main has two SSDs in RAID 1; Backup has a single SSD formatted XFS.

So here's the problem: transfers across my network generally reach 112 MB/s, but when I move files from Main to Backup from my Windows PC, the speed typically sits at 65 MB/s. Using Krusader on Main and connecting to Backup, it's roughly 70-75 MB/s. The most I've managed is about 112 MB/s while transferring a 20 GB file, no more.

Steps I've taken: set the MTU to 9000 on both servers, enabled jumbo frames on the 10G switch, raised the RX/TX buffers to 8192 (the maximum), and rebooted both servers and the switch. I've run iperf between both servers and got full 10G speeds. To make sure the SSD itself was working correctly, I moved a file from an Unassigned Devices SSD to it and got 500+ MB/s without issue; from server to server over the network, nothing over 112 MB/s. Need help here!

tower-diagnostics-20181018-2141.zip
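For reference, the MTU and ring-buffer tuning described above corresponds roughly to commands like the following on the Unraid console; this is a sketch, and the interface name eth0 and the 8192 limit are assumptions (check what your NIC actually reports, and note that MTU is normally set in Settings > Network Settings rather than on the command line):

# show the current and maximum ring buffer sizes for the NIC
ethtool -g eth0
# raise the RX/TX ring buffers to the maximum reported above
ethtool -G eth0 rx 8192 tx 8192
# set jumbo frames on the interface
ip link set eth0 mtu 9000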
JorgeB Posted October 21, 2018

Use iperf to test network bandwidth.
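A minimal single-stream test would look something like this, assuming iperf3 is installed on both servers (for example via the NerdPack plugin); the IP address is a placeholder:

# on the destination server (Backup)
iperf3 -s
# on the source server (Main): one stream for 30 seconds
iperf3 -c 192.168.1.20 -P 1 -t 30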
slimshizn Posted October 21, 2018 (Author)

I listed that as one of the steps I've already taken.
JorgeB Posted October 21, 2018

53 minutes ago, slimshizn said: "I listed that as one of the steps I've already taken."

Sorry, I read that when you originally posted but forgot. Re-reading your post, this is not clear to me: is your desktop also on 10GbE, and if you transfer directly from one Unraid server to the other using the cache pool on both, do you get the expected speeds?
slimshizn Posted October 21, 2018 (Author)

10 hours ago, johnnie.black said: "Is your desktop also on 10GbE, and if you transfer directly from one Unraid server to the other using the cache pool on both, do you get the expected speeds?"

No problem, thank you for the reply. My desktop is not 10GbE. If I transfer from one server to the other using the cache pool, 65-75 MB/s is the fastest I get. I've tried all kinds of files and different sizes.
JorgeB Posted October 22, 2018

12 hours ago, slimshizn said: "My desktop is not 10GbE. If I transfer from one server to the other using the cache pool, 65-75 MB/s is the fastest I get."

Is that using your desktop to do the transfer, or directly from one server to the other, i.e. without the desktop being involved? If you're copying through the desktop, and the desktop isn't 10GbE, those are perfectly normal speeds: the data goes from one server to the desktop and then from the desktop to the other server, limited by the gigabit NIC on the desktop. Worse, because that same NIC is receiving and sending data simultaneously, it will never even reach full gigabit speed.
slimshizn Posted October 22, 2018 (Author)

Are you sure it works like that? Prior to my 10GbE upgrade, I was getting 112 MB/s transferring from one server to the next through my desktop with no hiccups. Using Krusader on server A to transfer to server B, the speeds are exactly the same as when I go through the desktop.
JorgeB Posted October 22, 2018

It can depend on the hardware used (NICs and switch), but it's perfectly normal. You should transfer directly from one server to the other: for example, use the Unassigned Devices plugin to mount one server's share(s) on the other and then use mc or Krusader to transfer directly.
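If you prefer the console over the plugin, a direct server-to-server copy can be sketched like this; the IP address, share name, credentials, file path, and SMB version are all placeholders, and Unassigned Devices does essentially the same mount for you:

# on Main: mount a share exported by Backup
mkdir -p /mnt/remote/backup
mount -t cifs //192.168.1.20/backups /mnt/remote/backup -o username=youruser,password=yourpass,vers=3.1.1
# copy directly from Main's cache to Backup, showing throughput as it goes
rsync -a --progress /mnt/cache/share/bigfile.mkv /mnt/remote/backup/

This keeps the desktop completely out of the data path, so the result reflects only the two servers and the switch between them.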
slimshizn Posted October 24, 2018 (Author)

On 10/22/2018 at 5:57 AM, johnnie.black said: "You should transfer directly from one server to the other ... use mc or Krusader to transfer directly."

I've done that, and a direct transfer from one server to the other is still slow. Just to make sure it wasn't the cables, I switched over to an SFP+ module and fiber, all brand new. Tested speeds and it's still slow: direct transfer speed is around 65 MB/s.
JorgeB Posted October 24, 2018

7 minutes ago, slimshizn said: "Direct transfer speed is around 65 MB/s."

If iperf uses the full 10GbE bandwidth and you can still only copy at 65 MB/s with a direct transfer, that would imply the network isn't the problem; either the read speed on the source or, more likely, the write speed on the destination server is the bottleneck. The exception would be if you can transfer faster from the desktop to that same server, but in that case your problem doesn't make much sense. You'll need to do some testing to rule things out.
slimshizn Posted October 24, 2018 (Author)

I've also tested the read and write speeds of the SSDs on both the source and the destination; they are both where they should be. Before any 10GbE, on 1G Ethernet, I would always get 112-117 MB/s transfer speeds. Since then I have changed out both NICs and the cables.
JorgeB Posted October 24, 2018

Yeah, but like I said, it doesn't make sense that you're getting full 10GbE with iperf (that's about 1 GB/s, assuming the test was done with a single stream rather than ten simultaneous ones) and yet can't even transfer at 100 MB/s. Whatever the problem is, I'm sorry, but I'm out of ideas.
slimshizn Posted October 24, 2018 (Author)

I did some more testing. I put an ISO on each server, then sent each to an old QNAP that I use for appdata backups and important files. From my servers to the QNAP it stays at 60-65 MB/s; from the QNAP back to the servers it's also 60-65 MB/s. But from any of the three to my desktop, or from the desktop to any server, it's 112-115 MB/s, and my desktop only has a 1Gb NIC. Not sure what's going on with that.
slimshizn Posted October 24, 2018 (Author, edited)

I used Unassigned Devices to connect to the other server over SMB, opened Krusader, and transferred a file from the cache to the destination server. The fastest I could get was 124 MB/s. Edit: it went up to 149 MB/s and then back down.
slimshizn Posted October 25, 2018 (Author)

Anyone else have any idea on what's going on here?
mgutt Posted June 19, 2020 (edited)

On 10/25/2018 at 6:24 PM, slimshizn said: "Anyone else have any idea on what's going on here?"

Did you solve the issue? My transfer speed isn't as low as yours, but I think it should be better. What I did:

a) I found a hint in this thread on how to install iostat, so I installed it as follows:

cd /mnt/user/Marc
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/ap/sysstat-11.2.1.1-x86_64-1.txz
upgradepkg --install-new sysstat-11.2.1.1-x86_64-1.txz

b) Started iostat as follows:

watch -t -n 0.1 iostat -d -t -y 5 1

c) Through Windows I downloaded a huge file that is located on my SSD cache; as expected, iostat shows it being read from the NVMe.

d) Then I downloaded a smaller file to test the RAM cache. The first download was still delivered from the NVMe.

e) The second transfer of the same file shows no disk activity at all (it was served from RAM / the SMB RAM cache).

This leaves some questions:

1.) Why is the SSD read speed 80 MB/s slower than reading from RAM, even though the SSD can sustain well over 1 GB/s?
2.) Why is the maximum around 500 MB/s?

Note: my PC's network status, the Unraid dashboard, and my switch all show 10G as the link speed.
JorgeB Posted June 19, 2020

25 minutes ago, mgutt said: "1.) Why is the SSD read speed 80 MB/s slower than reading from RAM ... 2.) Why is the maximum around 500 MB/s?"

Run a single-stream iperf test to check network bandwidth. It's normal that transfers served from RAM are a little faster, due to lower overhead/latency.
mgutt Posted June 19, 2020 (edited)

4 hours ago, johnnie.black said: "Run a single-stream iperf test to check network bandwidth ..."

No need to test iperf. I enabled the FTP server, opened FileZilla, set parallel connections to 5, and chose 5 huge files from 5 different disks. That way I was able to reach 900 MB/s in total.

For a similar test I then started multiple downloads through Windows Explorer (SMB). This time my wife was watching a movie through Plex, so the results could be a little slower than the maximum, but FileZilla was still able to download at over 700 MB/s at the same time, so it shouldn't make a huge difference.

So what's up with SMB? I checked the SMB version in use through Windows PowerShell and it returns 3.1.1.
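As a cross-check, the negotiated protocol can also be read on the Unraid side while the Windows client is connected; this assumes the Samba build shipped with Unraid 6.8.3, which is recent enough to show a protocol column (SMB3_11 corresponds to SMB 3.1.1):

# list active SMB sessions, including the negotiated protocol version per client
smbstatus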
mgutt Posted June 20, 2020

I checked the smb.conf and it contains a wrong setting:

[global]
    # configurable identification
    include = /etc/samba/smb-names.conf
    # log stuff only to syslog
    log level = 0
    syslog = 0
    syslog only = Yes
    # we don't do printers
    show add printer wizard = No
    disable spoolss = Yes
    load printers = No
    printing = bsd
    printcap name = /dev/null
    # misc.
    invalid users = root
    unix extensions = No
    wide links = Yes
    use sendfile = Yes
    aio read size = 0
    aio write size = 4096
    allocation roundup size = 4096
    # ease upgrades from Samba 3.6
    acl allow execute always = Yes
    # permit NTLMv1 authentication
    ntlm auth = Yes
    # hook for user-defined samba config
    include = /boot/config/smb-extra.conf
    # auto-configured shares
    include = /etc/samba/smb-shares.conf

aio write size cannot be 4096; the only valid values are 0 and 1: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html

Quote: "The only reasonable values for this parameter are 0 (no async I/O) and 1 (always do async I/O)."

But I tested both and it did not change anything. I tested this solution without success, too. Other Samba settings I tested:

# manually added
server multi channel support = yes
#block size = 4096
#write cache size = 2097152
#min receivefile size = 16384
#getwd cache = yes
#socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
#sync always = yes
#strict sync = yes
#smb encrypt = off

server multi channel support is still enabled, because it allows multiple TCP/IP connections.

Side note: after downloading so many files from different disks, I found out that my RAM has a maximum SMB transfer speed of 700 MB/s. But if I download from multiple disks, the transfer speed is capped at around 110 MB/s (and falls under 50 MB/s once it starts reading from disk). All CPU cores show extremely high usage (90-100%) when two simultaneous SMB transfers are running; even one transfer produces a lot of CPU load (60-80% on all cores).

Now I'll try to set up NFS in Windows 10.
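One way to verify which of these values Samba actually loaded, rather than what is written in the files, is testparm; a minimal sketch, assuming the stock Unraid config path and filtering only the parameters discussed above:

# dump the effective, parsed configuration and show the relevant parameters
testparm -s /etc/samba/smb.conf 2>/dev/null | grep -E "aio|multi channel|sendfile"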
mgutt Posted June 20, 2020

OK, last test for today. I enabled NFS in Windows 10 as explained here and downloaded from 3 disks (the 4th disk was busy with unBALANCE). I was able to hit 150 MB/s per drive without problems.

Conclusion: something is really wrong with SMB in Unraid 6.8.3.
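If you want to confirm what the Unraid side is actually exporting over NFS before testing from Windows, something like this should work once NFS is enabled in Settings (a rough sketch, no server-specific names assumed):

# list the active NFS exports and their options
exportfs -v
# confirm the NFS service is registered with the portmapper
rpcinfo -p | grep nfs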
JorgeB Posted June 20, 2020

6 hours ago, mgutt said: "Conclusion: something is really wrong with SMB in Unraid 6.8.3."

If that were true, everyone would have the issue, and most users, including myself, can read/write at normal speeds over SMB. There still might be some setting or configuration that doesn't work correctly for every server, though.
JorgeB Posted June 20, 2020

Also, 12 hours ago, mgutt said: "No need to test iperf."

But why not do it? There's a reason we ask for it: if you get bad results with a single-stream iperf test, you'll likely also get bad results with a single SMB transfer.

12 hours ago, mgutt said: "set parallel connections to 5 and chose 5 huge files"

That isn't useful at all, since it's not a single stream.

Another thing you can test is user share vs. disk share. User shares always have some additional overhead, and some users are much more affected by it than others, so try transferring to/from a disk share and compare.
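One way to separate the user-share (shfs) overhead from the network entirely is to read the same file locally through both paths on the server; a rough sketch, where the share and file names are placeholders and the caches are dropped between runs so RAM caching doesn't skew the second read:

# read through the user share (goes through the shfs overlay)
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/user/Movies/bigfile.mkv of=/dev/null bs=1M status=progress
# read the same file through the underlying disk/cache path directly
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/Movies/bigfile.mkv of=/dev/null bs=1M status=progress

If the second run is markedly faster, the overhead is in the user-share layer rather than in Samba or the network.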
mgutt Posted June 20, 2020 (edited)

24 minutes ago, johnnie.black said: "That isn't useful at all, since it's not a single stream."

I also tested a single download through FTP from the SSD cache; if the file is already in RAM, it boosts up to 1 GB/s.
JorgeB Posted June 20, 2020

25 minutes ago, johnnie.black said: "But why not do it? There's a reason we ask for it."