Posts posted by metabubble

  1. On 6/26/2021 at 1:37 AM, hugenbdd said:

    The way it's currently scripted, no. It pipes the output of the find command into Unraid's mover binary.

     

    In the future? Possibly; there are some changes to Unraid that I'm waiting to see implemented that will track the mover's progress. Once that happens I may be able to do something like this. It's also possible a "tee" command could be used, but I don't think that would satisfy the "after moved" requirement you mentioned.

     

    Curious: what is the use case for running a script after every file?

    I am trying to debug IO starvation whenever the mover is running. Even with nice and ionice at the lowest priority, as soon as the mover runs, SMB craps out and Plex will not serve any media; basically what everyone is complaining about. I wanted to see whether running "sync" (or "sync; sleep 1") after every file, pausing the mover until the cache is flushed, would improve the situation by giving Plex a little bit of IO time.

     

    I have narrowed the problem down to the mover writing everything into the write cache, which becomes saturated and is then flushed to disk as a blocking operation. A hook into the mover would give me control to pause it briefly whenever reads occur, so I could read ahead into the read buffer...
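    To make the idea concrete, a minimal sketch of the per-file loop I have in mind. The share path is a placeholder and rsync merely stands in for the real mover binary, which the script normally feeds through a pipe:

        # Hypothetical per-file move loop with a sync after each file
        find /mnt/cache/Share -type f | while IFS= read -r f; do
            dest="/mnt/user0/${f#/mnt/cache/}"            # mirror the path onto the array
            mkdir -p "$(dirname "$dest")"
            rsync -a --remove-source-files "$f" "$dest"   # stand-in for the mover binary
            sync                                          # pause until the write cache is flushed...
            sleep 1                                       # ...giving Plex/SMB a window for reads
        done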

  2. On 6/15/2021 at 8:41 PM, itimpi said:

    Not quite the question you asked, but is there any reason not to use the WireGuard VPN that is built into recent Unraid releases? It can be used regardless of whether the array is started or not.

    I have a homogeneous OpenVPN-everywhere setup. I have not yet looked into WireGuard configuration, nor do I know whether I can use it on a couple of slightly restricted machines. I will look into it, though.

  3. I am certain I have pinned down the problem. Because vm.dirty_ratio defaults to 20%, the mover first writes into RAM, and with large amounts of RAM it keeps doing so for a long time. When the buffer finally fills, the OS flushes it to the disk in a blocking manner, meaning no other IO happens on that drive until the flush is done; that starves every other process of reads. What I have found is that reducing vm.dirty_ratio to 1% yields more frequent short reads in between the writes, thus giving Plex a chance to serve content. Set it too low, however, and you will hurt write performance.

     

    RIP RAM. Unraid needs to find a way to bypass the write cache for the mover, or a different solution to the blocking-IO problem. Maybe even try a different IO scheduler, one that always prioritises reads over writes, if that is possible.
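    For anyone who wants to try it, this is roughly the tuning I mean (the persistence step via the go file is my suggestion, not something the stock setup does):

        # Defaults are typically vm.dirty_ratio=20, vm.dirty_background_ratio=10
        sysctl vm.dirty_ratio vm.dirty_background_ratio

        # Cap dirty pages at 1% of RAM so writeback happens in small, frequent
        # chunks instead of one long blocking flush (too low hurts write speed)
        sysctl -w vm.dirty_ratio=1

        # To persist across reboots on Unraid, append the command to the go file:
        #   echo 'sysctl -w vm.dirty_ratio=1' >> /boot/config/go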

  4. Seeing that binhex-plex is made for Unraid, I do not have high hopes for my question, but I think it is nevertheless worth asking.

     

    For two weeks now, binhex-plex has not started on my Synology DiskStation anymore, because the NAS runs kernel 3.10.105. The log says "unsupported kernel", which sucks. I stayed on the previous version for a while before grabbing the official Plex docker, which also produces satisfactory results.

     

    However: I am also using binhex-delugevpn and binhex-sabnzbd, and I am fearful that if I update them, they will fail with the same error, since my kernel is outdated (and I have no way of updating it). Is there any workaround for this, so I can keep using these two amazing dockers?

  5. I have observed that with "Force Turbo" enabled, the mover returns to the regular write mode at the end of the move before all data has been written, so the last couple of megabytes are always written in read + overwrite mode.

     

    I wonder if that could be improved by calling 'sync' before reverting turbo mode.
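    Something along these lines at the end of the move; I am assuming the script toggles the write mode through mdcmd, and "auto" is my guess at the value it reverts to:

        sync                                             # block until all dirty pages hit the disks
        /usr/local/sbin/mdcmd set md_write_method auto   # only then leave reconstruct-write mode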
