
20_100

Members · 16 posts

Posts posted by 20_100

  1. 35 minutes ago, theruck said:

    Try to list the directory with a different command; if it takes as much time, the slowness might be caused by ls itself. Try find, or use ls without sorting - ls -U should be quicker.

    Coloring of files in bash can also have an impact on the output.

    It can also be filesystem dependent, so you can get different results using XFS.

     

    Just tried with -U; exactly the same behavior.

  2. 26 minutes ago, theruck said:

    (same quote as above)

     

    Listing 100,000 files from a single drive works instantly on the same server, as long as I run the command on /mnt/disk_ instead of /mnt/user.
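
    In other words, something like the following rough sketch (disk1 is a placeholder for any array disk, and Myshare/Thefolder stands for the real share path); only the /mnt/user path is slow:

    time ls -U /mnt/disk1/Myshare/Thefolder > /dev/null   # instant on a single XFS disk
    time ls -U /mnt/user/Myshare/Thefolder > /dev/null    # takes on the order of a minute through the user share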

  3. 23 minutes ago, theruck said:

    (same quote as above)

     

     

    The same very high latency happens when FTP services try to list the directory, and when I list it over SMB, sshfs or NFS, with the ls command, or with the Get-ChildItem command of a PowerShell running in Docker. I haven't found a way to list this directory without the issue.

     

    It also happens when I simply try to insert a new file using any of the above methods.
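
    The insert case can be reproduced the same way; a minimal sketch with the same placeholder paths (the .tmp file names are arbitrary):

    time touch /mnt/disk1/Myshare/Thefolder/insert-test-disk.tmp   # instant directly on the disk
    time touch /mnt/user/Myshare/Thefolder/insert-test-user.tmp    # shows the same long delay through the user share
    rm /mnt/disk1/Myshare/Thefolder/insert-test-disk.tmp /mnt/user/Myshare/Thefolder/insert-test-user.tmp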

  4. 1 hour ago, Frank1940 said:

    One thing to try is to spin all the disks up on the array first.

     

    Question: How many items are being listed when you are experiencing an "almost 1 minute" delay? My observation is that the output listing from the ls command is also sorted alphabetically. You should realize that this is going to take longer and longer as the number of items increases. (This time increase depends on which sort routine was implemented by the Linux/Unix(?) developer who first wrote this command, as some sort routines are much faster than others - particularly as the number of items increases!)

    Around 100 results. I can't imagine it takes any time to sort that 🙂
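
    As a quick sanity check that sorting ~100 names is negligible (a throwaway test that doesn't touch the share at all):

    time ( seq 1 100 | shuf | sort > /dev/null )   # completes in a few milliseconds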

  5. Hi,

     

    I have a growing number of files, around 500,000, on a single share, spread across 7 disks (XFS).

    These files are between 30 MB and 120 MB.

    They are in the same directory.

    I have severe performance issues.

     

    I would like to discuss a specific local issue, and the general best practices to improve the situation.

     

    From a local Unraid terminal, the same ls command, which returns a small subset of the files, is instant on any individual drive:

    ls /mnt/disk{disknumber}/Myshare/Thefolder/*AAA*.*

    but takes forever (almost 1 minute) when I execute it on the merged file system:

    ls /mnt/user0/Myshare/Thefolder/*AAA*.*

     

    The issue is the same when it comes to inserting a new file.

     

    What I want to explore in this thread is:

    • Is this normal / to be expected?
    • Are there pieces of documentation that cover this topic?
    • Are there figures documented in Unraid that
    • Does it depend on the number of drives?
    • Does it depend on how files are spread across the disks?
      • If so, how does each share allocation method influence this?
    • The file names are random. Would using sub-directories help?
      • If so, how would the directory split level influence this?
    • As CPU, RAM and I/O usage don't seem to react, what tools would you use to investigate the bottleneck? (See the sketch after this list.)
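
    To make the last point concrete, this is the kind of thing I have in mind, assuming strace is available on the box and using the same placeholder paths as above (with a glob the expansion happens in the shell, so I trace a plain directory listing instead):

    # summarize which syscalls the listing spends its time in on the merged file system
    strace -c -f ls -U /mnt/user0/Myshare/Thefolder > /dev/null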

     

  6. Yes, I confirm.

     

    My setup so you can compare:

    I mounted /mnt/user/ to Z:; to do so, I registered the Windows service as described in the virtiofs documentation.

    Nothing special at all on the Unraid side, except maybe that I ran the "New Permissions" Unraid tool to reset my permissions when I started struggling with the issues we are discussing.

     

    The VirtioFsSvc Windows service hosts virtiofs.exe under the SYSTEM user.

     

    When you check the share's permissions, do you see Everyone and a broken SID S-1-5-0?

     

     


  7. Looks like the permission issue might be related to:

     

    https://github.com/virtio-win/kvm-guest-drivers-windows/issues/722

    and

    https://github.com/virtio-win/kvm-guest-drivers-windows/issues/660

     

    which seems to be closed and merged, so the fix might ship with the next release.

     

    edit: I confirm that I can create and edit files in new directories when I use a local account instead of an Active Directory account.

     

    edit 2: Apparently, all my issues are solved when using a local Windows account.

     

    Side note: I even used this local Windows user to create a Windows network share, and could grant permissions to AD users. That hardly worked when I tried doing it from an AD account - there were tons of issues. Everything works and feels very stable at the moment.

    I want to see if sharing this virtiofs drive, instead of using the Unraid SMB share, solves the permission issues and some of the performance issues I have been struggling with for years.

     

    edit 3: see the rest of the post below, as some issues seem to persist.

  8. Hi

     

    I use Unraid to back up files for a web app.

    During some peaks, hundreds of files can be queued to be processed.

    The app downloads each file to temporary storage (unrelated to Unraid), then parses and analyses it, then copies it to the cloud for storage and to Unraid for backup.

    4 service workers run concurrently to process these queued files.

     

    These files are between 50 MB and 150 MB.

     

    During the peaks, I have two problems:

     

    - Copying the files from the service worker drive to the share seems to work, but the target file is actually empty (0 bytes). This problem doesn't even happen only during peaks; it happens frequently even when the web app traffic is slow and there are not that many files to process.
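
    For illustration, this is the kind of post-copy check I'm thinking of adding to the workers (bash here only as a sketch; the real workers are part of the web app, and both paths and the file name are placeholders):

    src="/tmp/incoming/example-upload.dat"
    dst="/mnt/user/Backups/example-upload.dat"
    cp "$src" "$dst"
    # re-read the destination and compare sizes; 0 bytes means the copy silently failed
    if [ "$(stat -c %s "$src")" != "$(stat -c %s "$dst")" ]; then
        echo "size mismatch after copy: $dst" >&2
    fi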

     

    - It gets worse, and the share becomes unresponsive.

     

    It happens multiple times a day, and I have many files which are 0 bytes. They aren't originally 0 bytes, and the files stored in the primary location in the cloud are not empty. Only the Unraid copies are empty.
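
    For reference, this is how I hunt for the empty copies (the share path is a placeholder):

    # list zero-byte files under the share, with their modification times
    find /mnt/user/Backups -type f -size 0 -printf '%TY-%Tm-%Td %TH:%TM  %p\n'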

     

    One episode happened today, between 3:13 and 3:25.

     


     

     

    The thing is, for the first empty file, the one named AC0DDDF..., the app worker's log doesn't show an error: the program thinks the file has been correctly copied to its destination, the Unraid share.

     

    For some of the other files, the worker crashed during the copy.


     

    After a few minutes the shares become responsive again.

     

    A few remarks:

    - I tried with and without using a cache pool. There are empty files on the cache pool drives when I use one.

    - 2 of the 4 workers run on a VM hosted on the same Unraid instance; the 2 other workers are on a different server. All 4 workers have the same issues.

    - I have been having this issue for 2 years now.

     

     

     

     

     

     

    sunraid1-diagnostics-20211025-2059.zip
