Report Comments posted by tjb_altf4

  1. On 6/28/2020 at 10:41 AM, tjb_altf4 said:

    My usage was 5TB over 20 days (BTRFS RAID0 cache on 6.8.3).

After moving docker.img to the array and back again, my daily writes over the last 7 days have dropped from an average of 0.25TB to 0.14TB (daily data units from 442,229 to 254,563).

     

    A near 50% write reduction.

Having run the remount command last week, the last 5 days have seen the average daily data units written drop further to 155,949.

    Good result for now.
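
    For context, those daily figures come straight from the drive's SMART counters. A minimal sketch of reading them, assuming an NVMe cache device at /dev/nvme0 (substitute your own device):

    ```
    # NVMe reports "Data Units Written" in units of 1000 x 512 bytes (512,000 B)
    smartctl -A /dev/nvme0 | grep -i 'data units written'

    # Rough conversion: 254,563 units/day x 512,000 B ~= 0.13 TB/day,
    # in line with the ~0.14TB average quoted above.
    ```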

     

  2. On 6/28/2020 at 10:41 AM, tjb_altf4 said:

    My usage was 5TB over 20 days (BTRFS RAID0 cache on 6.8.3).

After moving docker.img to the array and back again, my daily writes over the last 7 days have dropped from an average of 0.25TB to 0.14TB (daily data units from 442,229 to 254,563).

     

    A near 50% write reduction.

Average is still pretty consistent with this.

    On 6/28/2020 at 10:49 AM, TexasUnraid said:

    Have you tried the remount command mentioned above?

I've run the remount command now; after 30 min the only noticeable writes are from a test media file (4.5GB), which increased writes by only the same amount.  The loop device is only sitting at 160MB, pretty good as I have a dozen or so very active dockers.

    I'll continue collecting data points for the next few days to see how it goes.
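
    If you want to watch the loop device yourself, a sketch using /proc/diskstats, assuming docker.img is attached as /dev/loop2 (check with losetup):

    ```
    # Confirm which loop device backs docker.img
    losetup -a | grep docker.img

    # Field 10 of /proc/diskstats is sectors written (512 B each)
    awk '$3 == "loop2" {printf "%.0f MB written\n", $10 * 512 / 1048576}' /proc/diskstats
    ```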

  3. 2 hours ago, itimpi said:

    I guess this raises a few questions worth thinking about:

• Is there a specific advantage to having the docker image file formatted internally as BTRFS, or could an alternative such as XFS help reduce the write amplification without any noticeable change in capabilities?

XFS isn't a supported backend for Docker; overlay2 seems to be the other usual choice.
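
    For reference, a quick way to confirm which storage driver a Docker host is using; the daemon.json snippet is illustrative only, since on Unraid the driver follows how docker.img is formatted:

    ```
    # Show the storage driver currently in use
    docker info --format '{{.Driver}}'

    # To request overlay2 explicitly on a stock Docker host, set in
    # /etc/docker/daemon.json (requires re-pulling images):
    #   { "storage-driver": "overlay2" }
    ```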

  4. 32 minutes ago, Marshalleq said:

Also, I have always assumed this is broken and not specific to me - so raising it here as it's probably a good time to do so, but I could be wrong.  Specifically:

     

    I assume there should not be 'no balance found' on BTRFS

The wording has always been a little misleading; it means there is no balance job currently running.
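
    You can see the message for yourself; assuming the pool is mounted at /mnt/cache:

    ```
    # "No balance found" simply means no balance job is currently running
    btrfs balance status /mnt/cache
    # Example output on an idle pool:
    #   No balance found on '/mnt/cache'
    ```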

  5. 2 hours ago, limetech said:

     

    Max for unRAID array is 30 devices (2 parity, 28 data), for pools also 30 devices; max number of pools 35 - that ought to be enough for anybody 😁

     

    I can see perhaps wanting more than 35 pools, but seriously, what's the use case for larger arrays/pools?

     

No, that's great. I thought we were still working with an overall system cap of 30 devices that we would need to split across the array and multiple pools.

    Thanks for the clarification!

  6. Quote

    A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

    Outstanding on both counts!

With all these storage options underway, plus multi-language support, Unraid is stepping up to the next level big time.

     

@limetech Any thoughts on expanding the current drive cap for pools/array, or changing licensing tiers to allow for more drives?

  7. 4 hours ago, limetech said:

227 comments(!) in that topic.  Is there a TL;DR?

• Write amplification on the cache: most users are reporting a sizable uptick in writes across different cache configurations
• Exacerbated further by write-heavy dockers such as Plex
• I believe it's related to the copy-on-write setting of the docker.img, or at least how it's mounted (see the sketch below)
• @johnnie.black mentioned he was aware a fix was on its way for the next release?
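
    On the copy-on-write point, a sketch of checking whether the NOCOW attribute is set on the image; the path is the stock Unraid default and may differ on your system:

    ```
    # 'C' in the attribute flags means NOCOW (copy-on-write disabled)
    lsattr /mnt/user/system/docker/docker.img

    # Note: chattr +C only takes effect reliably on a new, empty file,
    # so the attribute must be set before the image is populated.
    ```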
  8. A word of caution...

Nested virtualization using a (virtualised) Windows host is still not supported for current-generation AMD CPUs; this is a Microsoft limitation.

You will likely brick your Windows VM trying, so make sure you back up your VM if you want to give this a go.
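
    If you do want to experiment, a quick sanity check on the host first; AMD systems use the kvm_amd module:

    ```
    # "1" (or "Y") means nested virtualization is enabled in KVM itself;
    # the Windows-on-AMD limitation applies regardless.
    cat /sys/module/kvm_amd/parameters/nested
    ```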

     

FYI the problem is the lack of graphics drivers, but it is also impacted by certain cards' behaviour with the default display driver.

    Tested with a large 4K screen (possibly a contributing factor).

     

    My experience:

• Older AMD GPU: garbage experience, with micro-text at a tiny resolution
• New NVIDIA GPU: usable (but not full) resolution with expected text size; an OK experience
• New NVIDIA GPU + Nvidia Unraid build (i.e. having GPU drivers): full resolution, expected text size, great experience


I would expect a new AMD card with drivers to also be great, but a new AMD card on the default driver might be a crapshoot.

I used to have a horrid network share experience under Win10 years ago, over SMB or even when navigating shares on other Windows machines.

Similar symptoms to those described by someone above: 30+ seconds of waiting.

The trick was to untick the two Privacy checkboxes under Explorer > Folder Options > General.

    That might help in this instance, if you'd like to try.
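
    If you'd rather script it than click through the dialog, the same two Privacy options map to registry values under the Explorer key. A sketch, with value names to the best of my knowledge (verify on your own machine):

    ```
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" /v ShowRecent /t REG_DWORD /d 0 /f
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" /v ShowFrequent /t REG_DWORD /d 0 /f
    ```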

     

For what it's worth, I have multiple folders with thousands of folders in each and never see drastic slowdowns.

Note that although I have the Cache Directories plugin installed, it's been disabled for months.

     

[Image: win10.folderoptions.jpg - Win10 Folder Options dialog, example picture from the net; the two check boxes under Privacy should be unticked.]

  11. My 2c.

     

Try rebooting your router; automatic DNS resolution has been hit and miss for many since 6.7, and this seems to wake up the stack... for me anyway.

Alternatively, try setting a static DNS server such as Google's 8.8.8.8, as has been suggested by others for similar issues.
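
    A quick way to tell whether it's DNS rather than general connectivity, from the Unraid terminal:

    ```
    # If the raw IP responds but the name lookup fails, DNS is the culprit
    ping -c1 8.8.8.8
    nslookup google.com
    ```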

     

My Docker page was hung for 10 minutes, as was any sign of dockers on the dashboard, but I had no issues connecting directly to their web interfaces.

I gave the router a reset and it all came good.

     

  12. On 4/17/2019 at 1:49 AM, phbigred said:

    Are we just seeing a theme with Ryzen? Might be an unidentified bug.

If it is, it's probably tied to NUMA mode, as I don't see these problems on my 1950X in UMA mode.

Given that the difference between htop and the dashboard was the inclusion of iowait time, I wonder if processes are idling, waiting for resources they can't physically access due to enforced CPU node separation and isolation.
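
    For anyone wanting to compare their own layout, a sketch of inspecting NUMA topology on the host (numactl may not be present on a stock Unraid install):

    ```
    # How many NUMA nodes does the CPU expose? UMA mode shows a single node
    lscpu | grep -i numa

    # Per-node CPU and memory layout, if numactl is available
    numactl --hardware
    ```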

     

    Just a theory anyway.
