Unraid not using more than 1 thread while writing to the array


Solved by 1eob


I've recently gone 10Gbps and I noticed my transfer speed is limited to about 300MB/s. Looking at the overview while transferring, only one thread is being utilized and that thread is pinned (screenshot attached). I've enabled SMB multichannel, but that hasn't changed anything (screenshot attached).
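For reference, this is roughly what the multichannel part of my Samba extra configuration looks like (under the SMB settings in Unraid, if I remember right); the IP address and speed value below are placeholders for a 10G interface, not copied from my box:

    [global]
        # negotiate SMB3 multichannel with capable clients
        server multi channel support = yes
        # advertise the NIC's speed/RSS capability so Windows will open extra channels
        # (the address and speed are placeholders -- adjust to your own interface)
        interfaces = "10.0.0.2;capability=RSS,speed=10000000000"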

Link to comment
  • 1 year later...

Hi. So I've verified a few things now:

I am indeed writing to the cache, and I'm getting speeds of about 400MB/s.
I have confirmed that the cache SSD can comfortably do over 1000MB/s, using the DiskSpeed app running in Docker on Unraid.
I've also confirmed my connection to the server is 10Gb/s, using the locally hosted speedtest app running on Unraid.

My own PC has a drive that is easily capable of over 10Gb/s speeds.
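For anyone wanting to repeat the disk check from a shell instead of the DiskSpeed container, a rough fio run like this (path and sizes are just placeholders) should confirm the raw sequential write speed of the cache:

    # sequential 1M writes against the cache mount, bypassing the page cache
    fio --name=seqwrite --directory=/mnt/cache --size=4G \
        --bs=1M --rw=write --ioengine=libaio --iodepth=16 --direct=1 --numjobs=1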

Link to comment

What I've also noticed is that in Unraid, after the file transfer finishes at around 400MB/s, a few seconds later I can see the cache SSD suddenly writing at 2.9GB/s.

Which does not reflect the file transfer speeds at all.
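My guess (unconfirmed) is that the incoming data is first buffered in RAM and only flushed to the SSD afterwards, which would explain a short burst well above the network speed. The kernel's writeback thresholds and the amount of data waiting to be flushed can be checked like this:

    # dirty-page writeback tunables (percent of RAM before background/forced flush)
    sysctl vm.dirty_background_ratio vm.dirty_ratio
    # how much dirty data is currently waiting to be written back, in kB
    grep -E 'Dirty|Writeback' /proc/meminfo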

 


Link to comment

Are you using QEMU to create a virtual array that Unraid is then in turn working with? I'm not sure of any real advantages there, but there are a bunch of potential issues if an array rebuild is ever needed.

 

Is the cache drive getting direct access from Unraid? It looks like it is. In my tests, many NVMe drives will slow down at the elevated temperature you show in your earlier picture. If it is temperature related, additional heatsinking and/or airflow over the NVMe may resolve your problem.
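If thermal throttling is a suspect, the drive's own telemetry is the quickest check; assuming the nvme-cli package is available, something like this shows the temperature and any throttling events (device name is a placeholder):

    # controller temperature, warning thresholds and thermal throttle event counters
    nvme smart-log /dev/nvme0 | grep -iE 'temperature|thm'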

Link to comment

I'm definitely not network limited from what I've tried. I ran an OpenSpeedTest server on Unraid and this is what I got for the connection to my PC (screenshot attached).
I will try the iperf test next.
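For the record, the basic iperf3 run I am about to try looks like this (the IP is a placeholder for the server's address):

    # on the Unraid server
    iperf3 -s
    # on my Windows PC; -t 30 runs for 30 seconds, add -R to test the reverse direction
    iperf3 -c 192.168.1.10 -t 30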

Edited by 1eob
Link to comment

Well then. It's around the same speed I'm getting in the file transfer. What I don't get is why it is so low. Everything is 10G.

 

So the Windows terminal is my PC to the server (screenshot attached).

The Linux terminal is the server to my PC.

 

 


 

Also, I hope iperf defaults to a single stream; I'm not too familiar with this program.

Edited by 1eob
Link to comment
32 minutes ago, 1eob said:

Also, I hope iperf defaults to a single stream; I'm not too familiar with this program.

It is, and it does suggest the network is the problem. You should get close to line speed when all is well; 9Gb/s+ is usually considered a good result.
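If you want to rule out a per-stream limit, you can also ask for parallel streams explicitly; hitting line rate with several streams but not with one would point at a single-connection bottleneck (the IP and stream count are just examples):

    # four parallel TCP streams for 30 seconds
    iperf3 -c 192.168.1.10 -P 4 -t 30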

Link to comment

Yeah, I'm quite confused now about what the issue could be. My setup is a bit different: I have Unraid running in Proxmox. (I also ran the same iperf test directly on the host and achieved slightly better results, but nowhere near what it should be.)
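Since the VM adds its own layer, it may also be worth confirming the Unraid VM is on a virtio NIC with multiqueue enabled; in Proxmox that is a single line in the VM config (the MAC, bridge and queue count below are placeholders, not taken from this system):

    # /etc/pve/qemu-server/<vmid>.conf -- example NIC line
    net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4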

Link to comment
  • Solution

Alright, I found the issue... My switch is apparently really bad for some reason. I've just set up a temporary direct link between my two servers (skipping the switch entirely), and sure enough: 9.24Gbits/sec. Alright then. Sorry for wasting your time.

Big thank you for helping me through this and finally coming to some sort of conclusion on my end.


It's almost comical to see the difference between the direct link and the one going through the switch (screenshot attached).

 


Edited by 1eob
Link to comment

Nice update.  Happy to see you found the bottleneck.

 

As newer standards and higher speeds arrive across all the hardware, from motherboard buses to faster Ethernet standards to faster SSDs, it is sadly common to eventually hit unexpected bottlenecks. With Ethernet, some level of incompatibility sometimes pops up between chipset brands in the controllers and even the switches, as well as in the frame sizes used and the cache memory at each connection point. Sometimes a large improvement can be seen by either increasing or reducing the frame size. Jumbo frames can reduce overhead, but they sometimes actually slow down transfers because of the specific cache designs on various chipsets.
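If anyone wants to experiment with frame size, the MTU can be checked and changed per interface, and a do-not-fragment ping confirms the whole path actually carries the larger frames (interface name and target address are placeholders):

    # show the current MTU
    ip link show eth0
    # try jumbo frames on one interface (not persistent across reboots)
    ip link set dev eth0 mtu 9000
    # verify end-to-end: 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000
    ping -M do -s 8972 -c 4 192.168.1.10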

 

Also, if there are data errors, a smaller packet is resent much more quickly than a much larger jumbo frame, which can quickly result in much slower overall transfer speeds with jumbo frames if everything is not running 100% correctly.

 

Notice the retries in your transfers.  Something is definitely not happy.  Even with your direct connection between computers you are seeing some retries for some reason.  Cable types and terminations are of course a first place to check.  Sadly even factory built cables can at times be defective, and not meet the standards.
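A quick way to see whether retransmissions keep happening outside of iperf is to watch the kernel's TCP counters before and after a transfer (the grep is just a convenience filter):

    # cumulative TCP retransmission counters
    netstat -s | grep -i retrans
    # or per-connection detail, including retransmits
    ss -ti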

 

Your results remind me of when I first switched my network over to gigabit. Overall it seemed pretty great versus 100Mbit, but the numbers were not what I expected. I was seeing excessive retries going through my switches, and even switches from multiple vendors yielded similar results. I found two cables that were more problematic than the rest, so I swapped them out. I also banned jumbo frames from my network, which helped quite a bit. About 6 months later things started getting worse: one computer after another started dropping down to 100Mbit speed. So I bought some Intel Gb network cards to replace the Realtek ones for additional testing. With no other changes, network speeds were better than in any of my prior tests using the Realtek devices. I bought more Intel NICs in bulk to get better pricing and began to swap out the rest of the Realtek Gb NICs. I did not switch them all out immediately, but at first replaced the Realtek NICs as their performance died. In the end, about 60% of the Realtek NICs died within about 18 months of initial installation. Then I pulled out the rest and replaced them with Intel NICs. I have run Intel NICs exclusively ever since.

 

This past year, I have finally bought some motherboards with Realtek 2.5Gb NICs built in. I will be adding a 2.5Gb switch soon to actually stress-test them.

 

Going back to jumbo frames: unless your entire network supports them they can be problematic, and even the transition to routers and modems can be an issue and a source of lost performance. At best you would typically only get about a 10% overall speed boost from jumbo frames, which, weighed against data integrity, quicker packet recovery when needed, and better compatibility, just does not make it worth ever enabling them again for me. If I am setting up a system purely for top speed, sure, but for everyday use, no way.

Link to comment
