Comments posted by ptr727

  1. If you follow the details of the thread, my research shows that the problem is NOT SMB; the problem is the Unraid FUSE filesystem.

    Mount your SMB share under a user share (Unraid FUSE code in play), poor performance.

    Mount your SMB share under a disk share (no Unraid code), normal performance.
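
    To reproduce the comparison, something along these lines should work. This is only a minimal sketch: "tower" and the share and file names are placeholders, and it assumes the first target is exported as a user share and the second as a disk share.

        import os
        import time

        # Placeholder UNC targets: one through the user share (FUSE), one through the disk share.
        TARGETS = {
            "user share": r"\\tower\testshare\testfile.bin",
            "disk share": r"\\tower\disk1\testshare\testfile.bin",
        }
        SIZE_MB = 1024                    # write 1 GiB to each target
        CHUNK = b"\0" * (1024 * 1024)     # 1 MiB per write call

        for label, path in TARGETS.items():
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(SIZE_MB):
                    f.write(CHUNK)
                f.flush()
                os.fsync(f.fileno())      # make sure the data actually reached the server
            elapsed = time.time() - start
            os.remove(path)
            print(f"{label}: {SIZE_MB / elapsed:.1f} MB/s write")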

  2. 2 minutes ago, limetech said:

    Which tool?  Please provide link.  Also debugging these kinds of issues is a 2-way street.  This is not the only issue we have to work on.

    I have spent a significant amount of time and effort chasing the SMB performance problem (I immediately noticed the slowdown when I switched from W2K16 to Unraid), so I do think my side of the street has been well worn.

    I referenced the tool I wrote to automate the tests in the last three of my blog posts detailing my troubleshooting, every one of which has been posted in this thread.

    For completeness, here again: https://github.com/ptr727/DiskSpeedTest

  3. Ok, but why would an SMB option make a difference if it looks as if it is a "shfs" write problem, i.e. SMB over a disk share performed well, SMB over a user share performed poorly, and read performance was always good?

     

    I'll give it a try (case-sensitive SMB will break Windows), but I won't be able to test until next week.

     

    I believe it should be easy to reproduce the results using the tool I've written, so I would suggest you profile the code yourself, rather than wait for my feedback on the experiments.

  4. Thank you for the info.

     

    Would it then be accurate to say the read/write and write performance problems shown in the ongoing SMB test results are caused by shfs?

     

    Can you comment on why write performance is so massively impacted compared to read, especially since the target is the cache, which needs no parity computations on write, i.e. it could be read-through and write-through?

  5. After some more googling, I now assume that when you say shfs you are referring to Unraid's FUSE filesystem, which happens to share its name with the better-known shfs, https://wiki.archlinux.org/index.php/Shfs.

     

    A few questions and comments:

    - Is Unraid's FUSE filesystem proprietary, open source, or GPL such that we can request the source?

    - For operations hitting just the cache, with no parity and no spanning, why the big disparity between read and write for what should be a no-op?

    - Logically, cache-only shares should bypass FUSE and go directly to disk, avoiding the performance problem.

    - All appdata usage on a cache-only share will suffer from the same IO write performance problem as observed via SMB, unless users explicitly change their containers' appdata path from /mnt/user/appdata to /mnt/cache/appdata.
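
    A rough way to measure that difference is to run an appdata-like workload (many small writes) against both paths on the server itself. A sketch only: the "fusetest" directory name is made up, and the paths assume the standard Unraid layout.

        import os
        import time

        # Compare the FUSE path with the direct cache path for many small writes,
        # roughly what container appdata IO looks like. Run on the Unraid server.
        PATHS = ["/mnt/user/appdata/fusetest", "/mnt/cache/appdata/fusetest"]
        FILES = 2000
        PAYLOAD = os.urandom(16 * 1024)   # 16 KiB per file

        for base in PATHS:
            os.makedirs(base, exist_ok=True)
            start = time.time()
            for i in range(FILES):
                with open(os.path.join(base, f"f{i}.bin"), "wb") as f:
                    f.write(PAYLOAD)
            elapsed = time.time() - start
            print(f"{base}: {FILES / elapsed:.0f} small files/s")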

     

  6. See:

    https://github.com/ptr727/DiskSpeedTest

    https://github.com/Microsoft/diskspd/wiki/Command-line-and-parameters

    -Srw means disable local caching and enable remote write-through (i.e. try to disable remote caching); an example invocation is sketched at the end of this comment.

     

    What I found is that Unraid SMB is much worse at mixed read/write and write-only workloads compared to Ubuntu on the exact same hardware, where the expectation is a similar performance profile.

     

    Are you speculating that the problem is caused by FUSE?
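
    For reference, this is roughly how a diskspd run with -Srw can be scripted against both share types. An illustration only: the server and share names are placeholders, and the actual DiskSpeedTest tool builds its own command lines.

        import subprocess

        # Placeholder UNC targets: the same data exported as a user share (shfs/FUSE)
        # and as a disk share (no FUSE).
        TARGETS = [
            r"\\tower\usershare\diskspd.dat",
            r"\\tower\disk1\diskspd.dat",
        ]

        for target in TARGETS:
            cmd = [
                "diskspd.exe",
                "-b512K",   # 512 KiB blocks
                "-d30",     # 30 second run
                "-o4",      # 4 outstanding IOs per thread
                "-t4",      # 4 threads
                "-w50",     # 50% writes, i.e. mixed read/write
                "-Srw",     # disable local caching, enable remote write-through
                "-c4G",     # create a 4 GiB test file
                target,
            ]
            print("Running:", " ".join(cmd))
            subprocess.run(cmd, check=True)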