Did some Samba aio enable/disable tests. No time to do them with many different controllers, but I wanted to at least use a couple, so I tested on an xfs-formatted array with reconstruct write enabled connected to an LSI 3008, and on a 3-SSD btrfs raid0 pool connected to the Intel SATA ports. I used robocopy to copy two folders to/from an NVMe device in my Win10 desktop: first a folder with 6 large files totaling around 26GB, then one with 25k small to medium files totaling 25GB. I tried to remove RAM cache from the equation as much as possible.
I only ran each test once (an average of 3 runs would be more accurate, but I didn't have the time). These are the speeds reported by robocopy after the transfers were done:
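For anyone wanting to repeat the tests, the transfers can be reproduced with something like the Windows command below; the paths, share name, thread count, and unbuffered I/O flag are my assumptions, not necessarily the exact invocation used:

```
:: Hypothetical paths - copy the large-file test folder to the server share.
:: /E copies subfolders, /J uses unbuffered I/O (helps keep the RAM cache
:: out of the equation for large files), /MT:8 runs 8 copy threads,
:: /NP suppresses per-file progress so the end-of-run speed summary is clean.
robocopy C:\test\large \\tower\share\large /E /J /MT:8 /NP
```

Robocopy prints the average transfer speed in its summary after the copy finishes, which is where the numbers below come from.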
I was only going to use user shares for testing, but because of the very low write speed for small files I decided to repeat each test using disk shares:
Not what I was testing here, but still interesting results: shfs has a very large overhead with small files, especially for writes. Not something I do in my normal usage, but perhaps one of the reasons people with Time Machine backups are seeing very low speeds? I believe those use lots of small files.
As for Samba aio, I don't see any advantage in having aio enabled; if anything it appears to be generally a little slower. Add to that the fact that it apparently performs much worse for some users, and that I still don't trust that the btrfs issue is really fixed (it might come back in future releases), and I would leave it disabled by default. Of course different hardware/workloads can return different results, but if anyone wants to enable it, it's really easy using the Samba extra options.
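For reference, something like the following in the Samba extra options should turn aio back on. The `aio read size` and `aio write size` parameters set the minimum request size, in bytes, above which Samba uses asynchronous I/O, so a value of 1 effectively enables it for all requests (the exact values to use are up to you, this is just a sketch):

```
[global]
# use asynchronous I/O for any read/write of 1 byte or more
aio read size = 1
aio write size = 1
```

Setting both to 0 disables aio again.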