ryann — Posted August 1, 2020

Hey everyone, I've had this issue for months (although I put it on the back burner for a bit). I have 10Gb networking set up at my home. I can easily saturate the link between my normal desktop and laptop using iperf3 with -P4. Unraid, however, doesn't get anywhere close to 10Gb with the same iperf3 -P4 test.

Details:
- I tried setting my MTU to 9014 across the board. It works between my laptop and desktop, so they're not the issue. That path goes through my US-16-XG switch, so that's also not the issue.
- If I set Unraid's MTU to anything over 1500, I can no longer access the Web UI or my drives. Sounds like jumbo frames aren't supported... but they should be (Intel X722).
- Using ethtool, I found that my RX/TX ring buffers default to 512. Setting them to 4096 doesn't change anything. I'm somewhat confused that the RX Jumbo max is 0... but looking online, that doesn't seem all that uncommon.

I know my Unraid is unregistered (fun story there... my USB drive died this morning during a reboot to change the interfaces around). Running on a freebie drive until I get a new trusted USB drive tomorrow.

Other info: nazaretianrack-diagnostics-20200801-1224.zip
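For reference, the checks above map to roughly these commands (eth0 is a placeholder for the actual interface name, and all of these need root):

```shell
# Show current RX/TX ring buffer sizes and their hardware maximums
ethtool -g eth0

# Raise the ring buffers from the 512 default to 4096
ethtool -G eth0 rx 4096 tx 4096

# Set the interface MTU for jumbo frames
ip link set dev eth0 mtu 9000
```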
ryann — Posted August 1, 2020

Forgot to mention... I get about the same speeds testing from a VM running on the server against the server's iperf3 server. So it's something local to the server itself.
ryann — Posted August 1, 2020

New experiment: I ran the iperf3 server on the VM running on the server instead. I get about 10Gb throughput... so the hardware seems to work.
ryann — Posted August 1, 2020

It affects file transfers as well:

From Desktop to the VM running on the server (on the cache drive): [screenshot]
From Desktop to an SMB share on the server (also on the cache drive): [screenshot]
Vr2Io — Posted August 2, 2020

You have another 10G NIC (82599ES); does testing with it make any difference? You also have multiple VLANs set up; how about testing on a simple network first?
ryann — Posted August 2, 2020

40 minutes ago, Benson said:
"You have another 10G NIC (82599ES); does testing with it make any difference? You also have multiple VLANs set up; how about testing on a simple network first?"

Yea, the 82599ES was the NIC I first noticed this on. When I found out today that the 82599ES didn't support SR-IOV, I switched to the X722. I fought the 82599ES for several days, though.

Great idea trying a simple network first. The VM and my PCs are all on the same VLAN; the server is on a separate network. And that's exactly what is going on... staying on one VLAN, I can get much closer to my 10Gbit ceiling. So now I need to figure out how to fix that... Thanks so much! It's been driving me nuts.
ryann — Posted August 2, 2020

Last update: my network runs on top of Ubiquiti UniFi equipment. For the 10Gb network, I have a US-16-XG and a Dream Machine Pro. The US-16-XG is a layer 2 switch, so no routing table lives on the switch. To cross VLANs, traffic has to go back to the gateway (the Dream Machine Pro).
Vr2Io — Posted August 2, 2020

3 hours ago, ryann said:
"To cross VLANs, traffic has to go back to the gateway (the Dream Machine Pro)."

You mean some of the tests go through the UDM Pro (inter-VLAN routing)? That may cause the slowdown. I have the 82599ES, two other different 10G NICs, and a US-16-XG, with no performance problem on Unraid: iperf3 gets ~8Gbps (single session) in my testing, and SMB does 1GB/s.
SNIPER_X — Posted August 3, 2020

I may be having the exact same issue. I cannot get above 1.2Gbit (122MByte/sec) on my all-10Gb network. It's like I'm only able to saturate a 1Gbit link even though it's all running on 10! iperf tests can saturate the 10Gbit links, but try to write to the disks (even with SSD cache) and I hit the 1Gb wall.
ryann — Posted August 3, 2020

13 hours ago, SNIPER_X said:
"I may be having the exact same issue. I cannot get above 1.2Gbit (122MByte/sec) on my all-10Gb network. [...] iperf tests can saturate the 10Gbit links, but try to write to the disks (even with SSD cache) and I hit the 1Gb wall."

That's where I'm at now. What's funny is I can set up a share in my VM running on the server and transfer at near-10Gb speeds, but using Unraid's SMB, I'm hitting around 1.2Gbps just as you are. With iperf3 on a single thread I can hit around 9Gbps; with multiple streams, 9.8Gbps. So I think my target should be close to 9Gbps for SMB transfers (with NVMe SSDs).
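For anyone following along, the iperf3 runs referenced above look roughly like this (the hostname is a placeholder):

```shell
# On the server
iperf3 -s

# From the client: single stream
iperf3 -c tower.local

# From the client: four parallel streams
iperf3 -c tower.local -P 4
```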
Xaero — Posted August 3, 2020

One important thing: on Windows the jumbo-frame setting is 9014, while on Linux the equivalent is an MTU of 9000. Setting a 9014 MTU on Linux may very well break network connectivity; I would try setting 9000.

Additionally, disable the SMBv1/2 support in Unraid. One of the reasons you are seeing reduced performance with Unraid's SMB implementation is likely that SMB Multichannel is not being utilized while the legacy support is enabled. Your Windows 10 VM is using SMB Multichannel, and so are your laptop and desktop. Of course, iperf shouldn't be impacted by this, but your SMB transfers will be.
ryann — Posted August 3, 2020

Just now, Xaero said:
"One important thing is that on Windows it's 9014 for Jumbo frames while on Linux the same is 9000 MTU. [...] Additionally, disable the SMBv1/2 support in Unraid [...]"

I'll admit... this isn't my day job, so I don't have a bunch of expertise; I'm just relying on what I can read. I actually have my MTU set to 1500. I ran into issues getting some of my Dockers running with jumbo frames enabled (I don't recall if it was 9000 or 9014). I know the lower MTU can affect speeds, but everything I've read so far says jumbo frames won't really fix the transfer issue.

Let me figure out how to disable SMBv1/2 support. That does sound promising.
ryann — Posted August 3, 2020

Looks like I can add this to my SMB extra parameters?

```
# disable SMB1 for security reasons
[global]
min protocol = SMB2
```
Xaero — Posted August 3, 2020

You may also need to add:

```
# ENABLE SMB MULTICHANNEL
server multi channel support = yes
```

I've not heard of issues using a 9000 MTU with Docker yet, but I also can't run 9000 MTU with my current network configuration (my ISP-provided modem will not connect above 1500 MTU).

There will be significant performance implications if you use a MIXED MTU. If everything is 1500, or everything is 9000, things should be more or less the same, except during a large number (thousands) of large sequential transfers (gigabytes), where the larger MTU will start to pull ahead. With a mixed MTU, the problem is that incoming packets must be fragmented before being sent to any client that isn't using the larger MTU, which wastes a ton of resources on the switch or router. On the flip side, when the smaller-MTU client sends a packet, it uses the smaller MTU and the potential overhead savings of the large frame are lost, though this is not as bad of an impact.

EDIT: Removed flawed testing; will update later with proper testing. I'm not on a mixed-MTU network at the moment, so I can't actually test this, haha.
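As a rough back-of-the-envelope sketch of how small the per-frame gain from jumbo frames actually is (assuming IPv4 and TCP with no header options; real overhead varies):

```shell
# Payload efficiency per frame: TCP payload / on-wire frame size
#   usable payload = MTU - 20 (IP header) - 20 (TCP header)
#   on-wire frame  = MTU + 18 (Ethernet header + FCS)
for mtu in 1500 9000; do
  awk -v mtu="$mtu" 'BEGIN {
    printf "MTU %d: %.1f%% payload efficiency\n", mtu, 100 * (mtu - 40) / (mtu + 18)
  }'
done
# → MTU 1500: 96.2% payload efficiency
# → MTU 9000: 99.4% payload efficiency
```

So jumbo frames only buy a few percent of raw throughput; most of their benefit is reduced per-packet CPU and interrupt overhead on long sequential transfers, which matches the "more or less the same" observation above.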
ryann — Posted August 4, 2020

```
# SMB Performance & Security Tweaks
min protocol = SMB2
server multi channel support = yes
aio read size = 1
aio write size = 1
```

This is the best combo so far. Reads from the share to my desktop are around 850MB/s (not perfect, but I can accept it). Writes from my desktop to the share are still pretty low at 320MB/s.

I retested transferring a file between my desktop PC (Windows 10) and the VM (Windows Server 2016) running on Unraid, and hit my network bottleneck in both directions.
ChatNoir — Posted August 4, 2020

Did you try disabling both v1 and v2 as suggested by @Xaero? I would guess min protocol = SMB3 would do both, but I am no expert. ^^
Xaero — Posted August 4, 2020

Also note that a mixed MTU affects inbound (write) performance more than outbound (read) performance. The reason is pretty simple: inbound 9000-MTU packets must be split (fragmented) by the network appliance before they are transmitted to the client, which costs a rather substantial loss in throughput per packet and increases latency as well. Whereas packets sent by the lower-MTU client already fit within the larger frame size, so nothing has to be combined or split; latency isn't increased at all, and there is only a small (yet measurable) loss in throughput from giving up the large-frame overhead savings.

The performance definitely improved substantially just being able to take advantage of multichannel.
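To put a number on that fragmentation cost (assuming classic IPv4 fragmentation with 20-byte headers and no DF bit):

```shell
# A 9000-byte IP packet crossing a 1500-MTU hop:
#   original data = 9000 - 20 (header)
#   each fragment carries at most 1500 - 20 = 1480 data bytes,
#   rounded down to a multiple of 8 (fragment offsets are in 8-byte units)
awk 'BEGIN {
  data = 9000 - 20
  per  = 1500 - 20
  per -= per % 8
  printf "%d fragments\n", int((data + per - 1) / per)
}'
# → 7 fragments
```

So every jumbo frame headed to a 1500-MTU client becomes 7 packets, and that splitting work lands on the switch or router in the path.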
ryann — Posted August 4, 2020

I did try SMB3 but didn't see any performance improvement. That said, I misunderstood the original comment; I have it set to a minimum of SMB3 now.

For jumbo frames, I enabled 9014 on the server and 9014 on my desktop, and I don't see any change in performance.

Like I mentioned in my last experiment, transferring files between my desktop and the VM behaves as I would expect a 10Gbps connection to, and it's reading and writing to the same SSD on my server that the SMB share lives on.
Pourko — Posted August 18, 2020

On 8/3/2020 at 12:05 PM, ryann said:
"What's funny is I can setup a share in my VM running on the server and transfer at near 10Gb speeds, but using SMB, I'm hitting around 1.2Gbps"

I've been struggling with that for years! Last week the bright idea came to me to look into ACLs in Samba. It turned out they were enabled by default. I disabled them, and BOOM! For the first time in my life I saw Unraid saturate the network adapter! Try adding this to the global section of smb-extra.conf, and let us know how it goes:

```
nt acl support = No
```
ryann — Posted August 18, 2020

2 hours ago, Pourko said:
"Try adding this in the global section of smb-extra.conf, and let us know how it goes: nt acl support = No"

I'm not sure if I successfully got this enabled or not. I took my array down, Docker hung, so it never fully went down, and I had to force power-cycle the server. Now it's going through a parity check, and I found out the Recycle Bin plugin modified the SMB config, so now I have two [global] sections, which I can't change until the parity check finishes.

If two [global] sections is valid, then I don't see any improvement; it's either about the same or a little slower than before.
Pourko — Posted August 18, 2020

26 minutes ago, ryann said:
"I'm not sure if I successfully got this enabled or not. [...] I found out the Recycle Bin modified the SMB config so now I have two [global] sections [...]"

I don't know anything about Docker. I was talking about the stock Samba service that comes with Unraid. It uses two config files: one is recreated in RAM every time the server boots; the other (i.e., smb-extra.conf) is located in the config folder on your boot flash drive. There are no recycle bins in any of that.
ryann — Posted August 18, 2020

Just now, Pourko said:
"I don't know anything about Docker. I was talking about the stock Samba service that comes with Unraid. [...]"

Sorry for the confusion... Recycle Bin is an Unraid plugin. Docker hung as I took my array down, and because I had to do a hard reset, my array got flagged as dirty.

You can edit smb-extra.conf from /boot/config, or from the Settings -> SMB page. Where I'm thinking I may have a bad config is where the plugin updated and seemed to goof up my config a bit:

```
veto files = /._*/.DS_Store/
[global]
# SMB Performance & Security Tweaks
min protocol = SMB2
server multi channel support = yes
aio read size = 1
aio write size = 1
use sendfile = yes
nt acl support = no
# min receivefile size = 16384
# socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_RCVBUF=131072 SO_SNDBUF=131072
#vfs_recycle_start
#Recycle bin configuration
[global]
syslog only = Yes
syslog = 0
logging = 0
log level = 0 vfs:0
#vfs_recycle_end
```

It's easy enough to fix, but I can't try it until the parity check finishes (probably sometime tomorrow morning).
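For what it's worth, a cleaned-up merge of the two [global] sections would presumably look something like this (untested until the parity check finishes):

```
veto files = /._*/.DS_Store/

[global]
# SMB Performance & Security Tweaks
min protocol = SMB2
server multi channel support = yes
aio read size = 1
aio write size = 1
use sendfile = yes
nt acl support = no

#vfs_recycle_start
#Recycle bin configuration
syslog only = Yes
syslog = 0
logging = 0
log level = 0 vfs:0
#vfs_recycle_end
```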
Pourko — Posted August 18, 2020

1 minute ago, ryann said:
"You can edit smb-extra.conf from /boot/config, or from the Settings -> SMB page."

Right.
halfelite — Posted August 18, 2020

On 8/2/2020 at 7:00 AM, ryann said:
"Last update. My network runs on top of Ubiquiti Unifi equipment. For the 10Gb network, I have a US-XG-16 and a Dream Machine Pro. [...] To cross the VLAN, traffic has to go back to the gateway (Dream Machine Pro)."

Before you go changing settings on everything: did you update your UDM Pro to the newest beta? There was (is?) a known issue with VLAN traffic routed out the 10Gb SFP+ port causing slowdowns and generally terrible performance, enough that most people were running the UDM Pro over the gigabit line to the XG-16 just to get 1Gb VLAN routing.
DivideBy0 — Posted August 19, 2020

The UDM Pro backplane is 2G, if I remember correctly? So I don't think you can push 10G through it. Not 100% sure, but I think they changed the backplane speed between board revisions 3 and 5, and the current board revision is 8. You may want to look into that on the UniFi forum. I may be way off.