1eob Posted July 19, 2022
I've recently gone 10Gbps, and I noticed my transfer speed is limited to 300MB/s. Looking at the overview during a transfer, only one thread is being utilized, and that thread is pinned. I've enabled SMB multichannel, but that hasn't changed anything.
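(For anyone landing here later: on Unraid, SMB multichannel is set through the Samba extra configuration, and it only adds throughput when both ends have multiple links or RSS-capable NICs; with a single non-RSS connection it does nothing. A sketch of the relevant fragment, using real smb.conf option names; the interfaces line is an illustrative pattern only and must be adapted to your hardware:)

```ini
; Samba extra configuration (Settings > SMB on Unraid)
; Multichannel needs RSS-capable NICs or multiple links on BOTH ends.
server multi channel support = yes
; Optional, host-specific: advertise interface capabilities to clients.
; Example pattern only -- substitute your own IP/speed, or omit entirely:
; interfaces = "10.0.0.2;capability=RSS,speed=10000000000"
```

On the Windows side, `Get-SmbMultichannelConnection` in PowerShell shows whether multichannel actually negotiated.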
trurl Posted July 19, 2022
3 minutes ago, 1eob said: transfer speed is limited to 300MB/s
Are you writing to the SSD pool (cache) or to the array?
1eob Posted July 19, 2022
13 minutes ago, trurl said: Are you writing to the SSD pool (cache) or to the array?
Writing to the SSD cache pool.
1eob Posted July 19, 2022
Here is how the array and pool are set up.
trurl Posted July 19, 2022
That screenshot doesn't tell whether or not you are writing to cache. It looks like most writes have gone to disk1 since those stats were reset. Are you writing to a cache:yes or cache:prefer share?
1eob Posted July 19, 2022
Cache is set to yes.
JorgeB Posted July 24, 2022
Samba is single threaded, but you should still be able to get much more than 300MB/s, assuming fast enough devices and a relatively recent, high-clock CPU.
1eob Posted April 13
Hi. I've verified a few things now:
- I am indeed writing to the cache, and I'm getting speeds of about 400MB/s.
- I have confirmed the cache SSD can do well over 1000MB/s using the DiskSpeed app running in Docker on Unraid.
- I've also confirmed my connection to the server is 10Gb/s using the locally hosted speedtest app running on Unraid.
- My own PC has a drive that is well capable of over 10Gb/s speeds.
1eob Posted April 13
What I've also noticed in Unraid: after the file transfer finishes at around 400MB/s, a few seconds later I can see the cache SSD suddenly writing at 2.9GB/s, which doesn't reflect the file transfer speeds at all.
JorgeB Posted April 14
That can happen when the data is being cached in RAM and then flushed faster, if the device can handle it. Try transferring to an exclusive share (or disk share); that will bypass FUSE and should be noticeably faster.
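(For reference, the difference described here is between writing through the /mnt/user FUSE union layer and writing to the pool path directly. A minimal sketch of how you might measure the two paths yourself; the /mnt/... paths are Unraid-specific assumptions, so the runnable part below just times a write to a temp directory:)

```python
# Hypothetical sketch: time a sequential write and report MB/s. On an Unraid
# server you would run it once against the FUSE path and once against the
# direct pool path to see the FUSE overhead.
import os
import tempfile
import time

def timed_write(path: str, size_mb: int = 100) -> float:
    """Write size_mb of zeros with fsync and return throughput in MB/s."""
    chunk = b"\0" * (1 << 20)  # 1 MiB
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the device
    return size_mb / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    print(f"{timed_write(os.path.join(d, 'test.bin')):.0f} MB/s")

# On the server itself you would compare, e.g. (share name is illustrative):
#   timed_write("/mnt/user/Leo/test.bin")   # via FUSE
#   timed_write("/mnt/cache/Leo/test.bin")  # direct, bypassing FUSE
```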
1eob Posted April 14
Hi, how would I go about doing that?
1eob Posted April 14
Never mind, I figured it out. Will test now.
1eob Posted April 14
Alright, sadly that hasn't changed anything.
electron286 Posted April 14
Are you using QEMU to create a virtual array that Unraid is then in turn working with? I'm not sure of any real advantages there, but there are a bunch of potential issues if an array rebuild is ever needed. Is the cache drive getting direct access from Unraid? It looks like it is. In my tests, many NVMe drives will slow down at the elevated temperature shown in your earlier picture. If it's temperature related, additional heatsinking and/or airflow over the NVMe may resolve your problem.
JorgeB Posted April 14
6 hours ago, 1eob said: Alright, sadly that hasn't changed anything.
How are you making the transfer? An exclusive share should be noticeably faster.
1eob Posted April 15
I exposed my cache drive under disk shares so it can be accessed via SMB, then connected my PC to it and tried a file transfer.
JorgeB Posted April 16
Post a screenshot from Windows Explorer during a large file transfer using the disk share, and the same for the user share.
1eob Posted April 17
Alright, here are the two:
"Downloads to cache" is the disk share (this goes directly to the cache SSD).
"Downloads to Leo" is the user share (this share is using the SSD for cache).
The user share speeds are not consistent; for example, going at around 300 is on the lower side of "normal". Here is another run on the user share.
JorgeB Posted April 18
It does appear to be a little better, but you may be network limited, since the starting speed is the same. Run a single-stream iperf test in both directions.
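(For anyone following along, a single-stream test with iperf3 looks like this; replace the placeholder IP, and note that `-R` reverses the direction so both ways can be tested from the client side:)

```
# on the Unraid server
iperf3 -s

# on the Windows PC (a single stream is the default, equivalent to -P 1)
iperf3 -c <server-ip>        # PC -> server
iperf3 -c <server-ip> -R     # server -> PC (reverse)
```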
1eob Posted April 18
I'm definitely not network limited from what I've tried. I ran an OpenSpeedTest server on Unraid and this is what I got for the connection to my PC. I will attempt the iperf test.
1eob Posted April 18
Well then. It's around the same speed I'm getting in the file transfer. What I don't get is why it's so low; everything is 10G. The Windows terminal is my PC to the server; the Linux terminal is server to my PC. Also, I hope iperf defaults to a single stream, I'm not too familiar with this program.
JorgeB Posted April 18
32 minutes ago, 1eob said: Also, I hope iperf defaults to a single stream, I'm not too familiar with this program.
It is, and it does suggest the network is the problem. You should get close to line speed when all is well; 9Gb/s+ is usually considered a good result.
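(To put numbers on "close to line speed", here is the quick conversion between link rate and file-transfer throughput, using decimal units as both iperf and Windows Explorer report them:)

```python
# Relate network link speed (Gbit/s) to file-transfer throughput (MB/s).
def gbits_to_mbytes(gbps: float) -> float:
    """Convert Gbit/s to MB/s (decimal units: 1 Gbit = 1000 Mbit, 8 bits/byte)."""
    return gbps * 1000 / 8

# A clean 10GbE link tops out at 1250 MB/s on the wire; a 9Gb/s+ iperf
# result therefore corresponds to roughly 1125+ MB/s of goodput.
print(gbits_to_mbytes(10))           # 1250.0
print(round(gbits_to_mbytes(9.24)))  # the direct-link result: 1155 MB/s

# Conversely, the observed 300 MB/s transfer is only about 2.4 Gbit/s:
print(300 * 8 / 1000)                # 2.4
```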
1eob Posted April 18
Yeah, I'm quite confused now about what the issue could be. My setup is a bit different: I have Unraid running in Proxmox. (I also ran the same iperf test directly on the host and achieved slightly better results, but nowhere near what it should be.)
1eob Posted April 18 (Solution)
Alright, I found the issue: my switch is apparently really bad for some reason. I've just set up a temporary direct link between my two servers (skipping the switch entirely), and sure enough: 9.24Gbits/sec. Alright then. Sorry for wasting your time, and a big thank you for helping me through this and finally coming to some sort of conclusion on my end. It's almost comical to see the difference, with one link being direct and the other going through the switch.
electron286 Posted April 20
Nice update. Happy to see you found the bottleneck.

As newer standards and higher speeds arrive across the hardware, from motherboard buses to faster Ethernet standards to faster SSDs, it is sadly common to eventually hit unexpected bottlenecks. With Ethernet, some level of incompatibility sometimes pops up between brands of chipsets in the controllers, and even between switches, the frame sizes used, and the cache memory at all the connection points. Sometimes a large improvement can be seen by either increasing or reducing the frame size. Jumbo frames can reduce overhead, but sometimes they actually slow down transfers due to the specific cache designs of various chipsets. Also, if there are data errors, a smaller packet is resent much more quickly than a jumbo frame, which can quickly result in much slower overall transfer speeds if everything is not running 100% correctly.

Notice the retries in your transfers: something is definitely not happy. Even with your direct connection between computers you are seeing some retries. Cable types and terminations are of course the first place to check; sadly, even factory-built cables can at times be defective and fail to meet the standards.

Your results remind me of when I was first switching my network over to gigabit. Overall it seemed pretty great versus 100Mbit, but the numbers were not what I expected. I was seeing excessive retries going through my switches, and even switches from multiple vendors yielded similar results. I found two cables that were more problematic than the rest and swapped them out. I also banned jumbo frames from my network, which helped quite a bit. About 6 months later things started getting worse: one computer after another started dropping down to 100Mbit speed.

So I bought some Intel Gb network cards to replace the Realtek ones for additional testing. With no other changes, network speeds were better than in any of my prior tests using the Realtek devices. I bought more Intel NICs in bulk to get better pricing and began swapping out the rest of the Realtek Gb NICs, at first only as their performance died. In the end, about 60% of the Realtek NICs died within about 18 months of initial installation; then I pulled the rest, replaced them with Intel NICs, and have run fully Intel NICs since. This past year I finally bought some motherboards with built-in Realtek 2.5Gb NICs, and I will be adding a 2.5Gb switch soon to actually stress test them.

Going back to jumbo frames: unless your full network supports them, they can be problematic; even the transition to routers and modems can be an issue and a source of lost performance. At best you would typically only get about a 10% overall speed boost, which, for the sake of data integrity, quicker packet recovery when needed, and better compatibility, just does not make sense to me. If I were setting up a system purely for top speed, sure, but for everyday use, no way.
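(The "at best ~10%" figure above checks out with simple per-packet overhead math, assuming TCP/IPv4 with no header options; the constants are the standard Ethernet framing and header sizes:)

```python
# How much per-packet overhead do standard vs jumbo frames carry on TCP/IPv4?
ETH_OVERHEAD = 38    # preamble + Ethernet header + FCS + inter-frame gap
IP_TCP_HEADERS = 40  # IPv4 (20 bytes) + TCP (20 bytes), no options

def tcp_efficiency(mtu: int) -> float:
    """Fraction of wire time spent on TCP payload for a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

std = tcp_efficiency(1500)    # ~0.949 with the standard 1500-byte MTU
jumbo = tcp_efficiency(9000)  # ~0.991 with 9000-byte jumbo frames
print(f"standard: {std:.1%}, jumbo: {jumbo:.1%}, gain: {jumbo / std - 1:.1%}")
```

So even in the best case, jumbo frames buy only a few percent of extra goodput, consistent with the point that they are rarely worth the compatibility risk.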