nomisco Posted January 4, 2023
I know there are several posts on here about this, and a few appear to have solutions, but none of those apply to my situation. I used to be able to max my 1Gbps network connection when copying from my Windows 10 machine over SMB to unRAID: it would max out at ~110MB/s. Now it maxes at a consistent half of that, about 52MB/s. I don't know when it first happened, but I'm guessing it's been slow for about six months. I don't want to spam this thread with screenshots, so I've added them as an archive.

I've tested:
- Windows to unRAID using iPerf, with each machine tested as both client and server. Results are >930Mb/s.
- Benchmarks of the drives and controller (LSI 9211-8i), spinners and SSDs. The slowest drive is around 100MB/s at its slowest point.
- Windows to another Windows machine on the same network: ~110MB/s, SSD on both ends.
- Writing to the cache SSD and directly to the array, with no change.
- Safe mode.
- All dockers stopped.

I've played with network settings, turbo write etc., but the disks and network don't appear to be the bottleneck. I've recently set up Lancache for gaming downloads. Those cache only on an SSD and may contain a lot of small files, but even installing to both machines I see a consistent 75-80MB/s. The screens archive also shows that the array parity check speed is consistently ~150MB/s average. The test file is a 60GB ISO. The attached screens.zip shows the screenshots I took whilst running the tests. Thanks.
screens.zip unraid-diagnostics-20230104-1924.zip
JorgeB Posted January 4, 2023
Start by doing a single stream iperf test in both directions.
nomisco Posted January 4, 2023
Is that not what I've done here?
nomisco Posted January 5, 2023
As I understand it, the iPerf switch -P would have tested with parallel streams. As I didn't use it, I'm assuming these tests were with a single stream.
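For reference, a rough sketch of what single-stream tests in both directions look like with iperf3 (the host name "tower" is a placeholder, and the original tests may have used a different iperf version):

```shell
# On the Unraid server:  iperf3 -s
# From the Windows client, one stream each way:
#   iperf3 -c tower        # client -> server (upload)
#   iperf3 -c tower -R     # server -> client (download, reverse mode)
# A healthy gigabit link reports ~940 Mbits/sec with few or no retries.
# Converting the Mbits/sec iperf reports into the MB/s Windows shows:
mbits=940
awk -v m="$mbits" 'BEGIN { printf "%.1f MB/s\n", m / 8 }'
```

So a link that iperfs at >930Mb/s should support file copies well above 52MB/s, which points away from raw network capacity.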
JorgeB Posted January 5, 2023
There are a lot of retries in one of the directions, suggesting a network problem.
nomisco Posted January 5, 2023
6 hours ago, JorgeB said: There are a lot of retries in one of the directions suggesting a network problem.
I don't believe the number of retries is of concern when the network is being saturated. 1Gb/s can max out at about 80,000+ packets per second, and the network was minimally in use elsewhere; Skype, online gaming etc. (which would have prioritised packets). I'll substitute a segment of the network with a long cable, which will bypass a couple of switches, but because there's a 50% drop in throughput I don't have much hope for that. Just to add, iPerf shows that the network performance is largely as expected, and I see better speeds when using a different protocol through Lancache, so it points to an SMB problem to me.
JorgeB Posted January 5, 2023
19 minutes ago, nomisco said: I don't believe that the number of retries is of concern when the network is being saturated.
Might not be, but it's still not normal; iperf tests are usually clean, same as your test in the other direction.
nomisco Posted January 15, 2023
Bypassed the entire segment of the network, so Win10 > switch > unRAID. No change: SMB is still about half the data rate it used to be. I can transfer from the Win10 machine to another on the network at ~110MB/s, and the iPerf tests show good performance. SMB is the problem.
JorgeB Posted January 16, 2023
Enable turbo write and transfer directly from Windows to the array, to the disk with the most free space. What speed do you get?
nomisco Posted January 16, 2023
The disk settings are set to reconstruct write. It is writing to the disk with the largest available space, which is about 2TB free on a 4TB disk. The write speed is still ~50MB/s from the client to unRAID, but it appears to buffer in unRAID's memory, then dump large chunks to disk before waiting for some buffer to fill again, then repeating. Hopefully the below images give you some idea of the behaviour. The disk writes in the top image are to the parity and array disks. The cache disk (SSD) is not used during the test.
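The burst-then-pause pattern described above is consistent with ordinary Linux page-cache writeback rather than anything SMB-specific: incoming data lands in RAM and is flushed to disk in batches once dirty-page thresholds are crossed. As a sketch (the paths are standard Linux; the values are whatever your system happens to use), the thresholds can be inspected on the server with:

```shell
# Percentage of RAM that may hold dirty (not-yet-written) data before
# writers are throttled, and the lower threshold where background
# flushing to disk starts:
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio
```

Bursty flushing like this is normal; it only becomes the bottleneck if the steady-state transfer rate drops, which is what the ~50MB/s figure suggests here.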
JorgeB Posted January 16, 2023
Could be SMB, but I can write to Unraid at 1GB/s, and normally no one has problems getting line speed with SMB over gigabit. It could still be network related despite the clean iperf results. I would try two things: using a different NIC if available, and/or creating a new Unraid flash drive and restoring only the key and disk assignments, in case it's something to do with your Unraid install.
nomisco Posted January 16, 2023
41 minutes ago, JorgeB said: Could be SMB, but I can write to Unraid at 1GB/s, normally no one has problems getting line speed with SMB over gigabit, could still be network related despite the clean iperf results, I would try two things, using a different NIC if available and/or creating a new Unraid flash drive and restore only the key and disk assignments, in case it's something to do with your Unraid install.
There may be a perfect storm of something in my case with the many recent changes to the SMB implementation. It most certainly used to saturate the Gb network during SMB transfers. I shall do a fresh install in the next day or two and report back. Thanks for your help.
JorgeB Posted January 16, 2023
There are some known SMB issues, especially for Mac users, and also still some known issues when working with lots of small files, either browsing or copying, though it's much better with v6.11 than before. Transferring large files like I assume you are, there should not be any SMB issues, at least not with gigabit.
RutgerJonaker Posted April 2, 2023
I'm struggling with the same problem. Doing Proxmox backups to an Unraid SMB share is very slow. My Proxmox backup share is on an NVMe drive, so that's not the problem. I'm running Unraid on an Odroid H3+ with 32GB RAM. Before Unraid I ran TrueNAS on the same hardware and had no problem. Seems something is wrong with SMB in Unraid or the driver for Realtek NICs!
JorgeB Posted April 2, 2023
Try using a disk share instead.
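For anyone unfamiliar with the distinction: user shares are served through Unraid's FUSE layer (shfs), while a disk share exposes a single disk directly and skips that layer. A rough illustration with placeholder names (disk shares must first be enabled under Settings > Global Share Settings):

```shell
# Placeholder UNC paths as seen from a Windows client:
printf '%s\n' \
  '\\tower\backups -> user share (routed through shfs, extra overhead)' \
  '\\tower\disk1   -> disk share (direct to one disk, often faster)'
```

If the disk share is fast and the user share is slow on the same files, the overhead is in the shfs layer rather than in Samba or the network.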
RutgerJonaker Posted April 4, 2023
This fixed my problem.
denishay Posted September 3, 2023
On 1/16/2023 at 5:39 PM, JorgeB said: Could be SMB, but I can write to Unraid at 1GB/s, normally no one has problems getting line speed with SMB over gigabit, could still be network related despite the clean iperf results, I would try two things, using a different NIC if available and/or creating a new Unraid flash drive and restore only the key and disk assignments, in case it's something to do with your Unraid install.
I do not mean to rain on anyone's parade, but for me and at least four other friends on Unraid, that is not the case. Unraid is consistently "slow" on SMB transfers, with anything between 30MB/s and 80MB/s max on a Gigabit LAN. If you refer to the Unraid vs TrueNAS comparison on YouTube, you will easily see that the SAME hardware and network performs three times as fast (yes, even after he re-did the same tests using ZFS on Unraid too). Now, I love Unraid, and performance is still "OK" most of the time for what I use it for (mostly media streaming), but no, sorry, SMB performance on Unraid is notoriously bad for a reason. Acknowledging that would probably be a step towards having it thoroughly investigated, rather than trying to find out what's wrong on the "user side" pretty much every time. I'm not saying that things cannot originate from the user's setup, but for many of us with many different systems and sometimes different servers, it is very apparent that anything to and from Unraid SMB shares *is* slow. Much slower than pretty much anything else. It's a good thing it has loads of other qualities, but if SMB performance could get some love, it would make Unraid so much better!
itimpi Posted September 3, 2023
5 minutes ago, denishay said: I do not mean to rain on anyone's parade, but for me and at least 4 other friends on Unraid, it is not the case. Unraid is consistently "slow" on SMB transfers, with anything between 30MB/s to 80MB/s max on a Gigabit LAN.
In nearly all cases this is due to how the system is configured rather than SMB itself. For instance, those speeds are typical of writing to the main array if you do not have Turbo Write enabled, as documented in the online documentation. I agree this is not as fast as we would wish, but it is not SMB that is the limiting factor.
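To put rough numbers on that point (a back-of-envelope sketch under stated assumptions, not a measurement): without Turbo Write, Unraid array writes use read/modify/write, so the data and parity disks must each read the old sector, wait roughly a full platter rotation, then write. Sustained throughput therefore tends toward half the raw disk speed or less:

```shell
# Assumed sequential speed of a typical spinner, in MB/s:
platter=110
# Read/modify/write roughly halves it, ignoring seek and protocol overhead:
awk -v p="$platter" 'BEGIN { printf "~%.0f MB/s effective\n", p / 2 }'
```

That estimate lands right inside the 30-80MB/s range reported above, which is why the write mode matters more than the SMB settings here.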
denishay Posted September 5, 2023
On 9/3/2023 at 1:58 PM, itimpi said: In nearly all cases of this it is due to how the system is configured rather than SMB itself. For instance those speeds are typical of writing to the main array if you do not have Turbo Write enabled as documented in the online documentation. I agree this is not as fast as we would wish but it is not SMB that is the limiting factor.
OK. Just to be clearer: it is due to the way SMB is configured/implemented/whatever on Unraid by default. Most of us have not played with those settings. Using anything other than SMB for transfers, or any other system than Unraid with SMB, pretty much maxes out the Gigabit network. That was the whole point of my post above; I guess I didn't make it clear: if there are better settings to be had, why are they not set by default on new setups? In my specific case, I tried modifying the SMB settings but only got worse results (15Mb/s or less with Multichannel, for example). As I said earlier, I have made my peace with that. I don't care and don't really need more. But claiming there are "no problems" with the way Unraid configures SMB by default, and having *users* go through a complex game of trial and error, is just not right. If "optimal" settings exist, why are they not set by default, and/or why are no warnings issued?
KnifeFed Posted September 18, 2023
On 9/5/2023 at 11:53 AM, denishay said: OK. Just to be clearer, it is due to the way SMB is configured/implemented/whatever on Unraid by default. Most of us have not played with those settings. Using anything else than SMB for transfers or any other system than Unraid with SMB all pretty much max out the Gigabit network.
You didn't read the comment correctly. @itimpi explained that it's not an issue with SMB itself, but rather with how your Unraid system is configured (settings unrelated to SMB), mainly whether Turbo Write is enabled or not. In addition, it's due to how the main array and parity work in Unraid (compared to e.g. RAID), which doesn't really allow for very high write speeds (especially with Turbo Write disabled).
Confused Posted April 5
I got frustrated with the low transfer speed and decided to just use a terminal to SSH into the box and scp everything directly. 1800 files, 20MB/file, for a total of 23GB in size. Avg speed 114.55MB/sec.
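For anyone wanting to reproduce that comparison, a sketch (host, user, paths, and the elapsed time are placeholders) of timing a bulk copy over SSH and deriving the average rate:

```shell
# Time the copy, e.g.:
#   time scp -r ./backups user@tower:/mnt/user/backups/
# Then compute average MB/s from total size and elapsed seconds,
# e.g. ~23GB transferred in a hypothetical 206 seconds:
size_mb=23552
secs=206
awk -v s="$size_mb" -v t="$secs" 'BEGIN { printf "%.2f MB/s\n", s / t }'
```

scp carries per-file overhead of its own, so hitting near line rate across 1800 files is a strong sign the disks and network are healthy and the slowdown lives somewhere in the SMB path.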