Report Comments posted by ptr727
-
For the time being I've given up on Unraid fixing this and moved to Proxmox with ZFS: Removed link at the request of @limetech
-
Seems highly unlikely that this is an LSI controller issue.
My guess is that the user share FUSE code locks all IO while waiting for a mounted disk to spin up.
-
24 minutes ago, ap90033 said:
So is the issue resolved?
No
-
Still happens on my 6.8.2, had to disable WSD.
-
I tried with DirectIO yes, and DirectIO yes plus case insensitive yes; no difference (see attached results).
Given that a disk share over SMB showed good performance, I am sceptical that it is an SMB issue; my money is on a performance problem in the shfs write path.
-
2 minutes ago, limetech said:
Which tool? Please provide link. Also debugging these kinds of issues is a 2-way street. This is not the only issue we have to work on.
I have spent a significant amount of time and effort chasing the SMB performance problem (I immediately noticed the slowdown when I switched from W2K16 to Unraid), so I do think my side of the street has been well worn.
I referenced the tool I wrote to automate the tests in the last three of my blog posts, where I detail my troubleshooting, and in every post of mine in this thread.
For completeness, here again: https://github.com/ptr727/DiskSpeedTest
-
Ok, but why would an SMB option make a difference if it looks as if it is a "shfs" write problem? I.e. SMB over a disk share performed well, SMB over a user share performed badly, and read performance was always good.
I'll give it a try (case sensitive SMB will break Windows), but I won't be able to test until next week.
I believe it should be easy to reproduce the results using the tool I've written, so I would suggest you profile the code yourself rather than waiting for my feedback on the experiments.
-
Thank you for the info.
Would it then be accurate to say the read/write and write performance problems shown in the ongoing SMB test results are caused by shfs?
Can you comment on why write performance is so massively impacted compared to read, especially since the target is the cache and needs no parity computation on write, i.e. it can be read-through and write-through?
-
Some more googling, and I now assume that when you say shfs you are referring to Unraid's FUSE filesystem, which happens to share a name with the better-known shfs, https://wiki.archlinux.org/index.php/Shfs.
A few questions and comments:
- Is Unraid's FUSE filesystem proprietary, or open source, or GPL such that we can request the source?
- For operations hitting just the cache (no parity, no spanning), why the big disparity between read and write for what should be a near no-op?
- Logically, cache-only shares should bypass FUSE and go direct to disk, avoiding the performance problem.
- All appdata usage on a cache-only share will suffer the same IO write performance problem observed via SMB, unless users explicitly change their containers' appdata path from /mnt/user/appdata to /mnt/cache/appdata.
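To make the /mnt/user vs /mnt/cache comparison concrete, here is a minimal timing sketch. The Unraid paths in the comments are assumptions taken from this thread; the demo itself uses a temporary directory so it runs anywhere, and is not the author's DiskSpeedTest tool.

```python
import os
import tempfile
import time


def write_throughput_mbps(path, size_mb=64, block_kb=128):
    """Write size_mb of data to a file under `path` and return MB/s,
    including the final flush-to-disk in the timing."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    target = os.path.join(path, "throughput.tmp")
    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # count the flush, not just buffered writes
    elapsed = time.perf_counter() - start
    os.remove(target)
    return size_mb / elapsed


# On Unraid one would compare the FUSE path against the direct disk path,
# e.g. (hypothetical paths, per this thread):
#   write_throughput_mbps("/mnt/user/appdata")   # via shfs/FUSE
#   write_throughput_mbps("/mnt/cache/appdata")  # direct to the cache disk
with tempfile.TemporaryDirectory() as d:
    print(f"{write_throughput_mbps(d):.1f} MB/s")
```

If the thread's hypothesis is right, the /mnt/user number should come in well below the /mnt/cache number on the same hardware.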
-
So, you are absolutely right, a "disk" share's performance is on par with that of Ubuntu.
Can you tell me more about "shfs"?
As far as I can google, shfs was abandoned in 2004 and replaced by SSHFS, but I don't understand why a remote SSH filesystem would be used; or are we talking about vanilla libfuse as integrated into the kernel?
-
Testing now, about an hour left to go.
Did you try to reproduce the results I see? The instructions should be clear: https://github.com/ptr727/DiskSpeedTest
-
See:
https://github.com/ptr727/DiskSpeedTest
https://github.com/Microsoft/diskspd/wiki/Command-line-and-parameters
-Srw means disable local caching and enable remote write-through (i.e. try to disable remote caching).
What I found is that Unraid SMB is much worse at mixed read/write and at write compared to Ubuntu on the exact same hardware, where the expectation is a similar performance profile.
Are you speculating that the problem is caused by FUSE?
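The write-through behaviour that diskspd's -Sw/-Srw flags request on Windows can be approximated on the POSIX side with O_SYNC; a hedged sketch to illustrate the semantics, not part of the author's tool:

```python
import os
import tempfile


def write_through(path, data):
    """Write data with O_SYNC so each write is flushed to stable storage
    before returning, roughly analogous to diskspd's -Sw (write-through).
    O_SYNC is POSIX; the equivalent on Windows is FILE_FLAG_WRITE_THROUGH."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)


# Quick demonstration against a temporary file.
tmp = tempfile.mktemp()
write_through(tmp, b"hello")
with open(tmp, "rb") as f:
    print(f.read())  # b'hello'
os.remove(tmp)
```

Benchmarking with write-through enabled is what makes the numbers reflect the storage path rather than the client's or server's RAM cache.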
-
You are welcome to run a test on your own setup for comparison; I describe my test method.
By my testing, the Unraid numbers really are bad; I attached my latest set of data.
DiskSpeedResult_Ubuntu_Cache.xlsx
Btw, 500 MB/s is near 4 Gb/s; are you running 10 Gb/s Ethernet?
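The MB/s-to-Gb/s conversion behind that question, spelled out (using decimal units throughout):

```python
mbps = 500               # megabytes per second, as reported by the benchmark
gbits = mbps * 8 / 1000  # 8 bits per byte, 1000 megabits per gigabit
print(gbits)             # 4.0 -> exceeds gigabit Ethernet, needs a 10 Gb/s link
```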
-
I have now tested Unraid vs. W2K19 VM, and Ubuntu VM, and now Ubuntu bare metal on the same hardware.
There is no reason why Unraid should be slower on the cache drive, but the ReadWrite and Write performance is abysmal.
https://blog.insanegenius.com/2020/02/02/unraid-vs-ubuntu-bare-metal-smb-performance/
-
Changed Status to Solved
-
Thx, it works.
-
Looks to me like a disk IO problem in Unraid, not a Samba problem.
https://blog.insanegenius.com/2020/01/18/unraid-vs-ubuntu-smb-performance/
-
Cache performance in v6.8.1 is worse than v6.7.2.
See: https://blog.insanegenius.com/2020/01/16/unraid-smb-performance-v6-7-2-vs-v6-8-1/
-
This is an annoyance that should take but a few minutes to fix, please.
Slow SMB performance (in Stable Releases)
-
If you follow the details of the thread, my research shows that the problem is NOT SMB; the problem is the Unraid FUSE filesystem.
Map your SMB share to a user share (Unraid FUSE code in play): poor performance.
Map your SMB share to a disk share (no Unraid FUSE code in the path): normal performance.