I know it's not exactly a top performer; I don't expect to run Docker, VMs, or anything beyond NFS and SMB on it.
However, I just need it to reach decent read speeds, which it did without problems before I started using Unraid on it. I assumed the overhead would only affect write speed and wouldn't make such a big difference on reads as well.
The slowdown is on the server itself: copying a file from /mnt/user to /dev/null drives shfs CPU usage to 100% and doesn't exceed 30 MB/s. Copying from /mnt/cache to /dev/null doesn't seem to push shfs to 100%, but the read speed is the same.
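In case it helps, this is roughly how I'm measuring (the share and file names are just examples from my setup):

    dd if=/mnt/user/share/bigfile of=/dev/null bs=1M status=progress    # read through the shfs user share
    sync; echo 3 > /proc/sys/vm/drop_caches                             # drop the page cache so the next read isn't served from RAM
    dd if=/mnt/cache/share/bigfile of=/dev/null bs=1M status=progress   # read the same file directly from the cache pool, bypassing shfs

Both reads sit around 30 MB/s.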
The CPU governor is currently set to On Demand and the frequency seems to be scaling correctly (the checks I ran are below). I haven't changed any network settings yet because the problem also shows up locally: I first noticed it when copying data from one disk to another within Unraid. Importing the data from the old drives to the new ones took a very long time, because the speed never exceeded 30 MB/s even though the drives can do 200 MB/s and more. At first I assumed it was Unraid limiting write speeds to the array...
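For completeness, this is how I checked the governor and scaling (standard cpufreq sysfs paths):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # prints "ondemand" on my box
    grep "cpu MHz" /proc/cpuinfo                                # current per-core frequencies, to confirm they ramp up under load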
Regarding NFS, I'm still getting stale file handle errors, but the frequency has gone from once every few minutes to about once a day. Again, I'm attaching diagnostics.
Should I give up on Unraid and switch back to a less resource-hungry OS?
unraid-diagnostics-20220515-1925.zip