Dephcon

Comments posted by Dephcon

  1. 2 hours ago, nickp85 said:

    Shift your Plex transcoding to memory by putting it in /tmp!  I did this not long ago: create a ramdisk on boot in /tmp with 4 GB of space and let Plex use it for transcoding.  You can allocate more if you want.  There is a post on the forum about it somewhere.  Great for reducing wear and tear on the disk.

    I used to do it in RAM when I had 32GB, but when I upgraded I only had 16GB of DDR4 available, so it's a bit tight now.
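
    For anyone who wants to try the ramdisk approach, a minimal sketch of what the quoted suggestion boils down to (the mount point and 4G size are just examples; on Unraid you'd typically put the mount in your go file so it's recreated on every boot, then point Plex's transcoder temporary directory at it):

      # create a 4GB tmpfs for Plex transcoding (example path and size)
      mkdir -p /tmp/plex-transcode
      mount -t tmpfs -o size=4g tmpfs /tmp/plex-transcode

    After that, map /tmp/plex-transcode into the Plex container and set it as the transcoder temporary directory in Plex's settings.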

  2. Just wanted to circle back to this now that my testing is over and I've finalized my caching config (for now).

     

    Previously I was using a 4-SSD BTRFS RAID10 "cache" pool with 4K partitioning.

     

    Now I have a 2-SSD BTRFS RAID1, 1M-partitioned, for array cache and docker-xfs.img, plus an XFS-formatted pool device to use as scratch space.  Currently the scratch space holds Plex transcoding and the duplicacy cache.  I might move my usenet download/extract over to this scratch pool as well, but I want to get extended performance data before changing anything further.

     

    I'm pretty happy with the reduction in IO from space_cache v2 and 1MiB partitioning.  All-XFS would have been "better" for disk longevity, but I really like the extra level of protection from BTRFS RAID.
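
    For anyone wanting to check the same two settings on their own pool, a rough sketch (the device name is just an example):

      # partition alignment: a start sector of 2048 means 1MiB-aligned,
      # while the older-style layout starts at a lower sector and is only 4K-aligned
      fdisk -l /dev/sdb
      # free-space cache: the mount options should include space_cache=v2
      grep btrfs /proc/mounts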

     

    last 48hrs:

    [screenshot: Screenshot from 2020-08-04 15:43:22]

  3. 23 minutes ago, limetech said:

    But negligible given the absolute amount of data written.

     

    A loopback is always going to incur more overhead because there is the overhead of the file system within the loopback and then there is the overhead of the file system hosting the loopback.  In most cases the benefit of the loopback far outweighs the extra overhead.

    In this case, yes; however, I purposely removed some of my higher-IO loads from this test to limit the variability of writes so I could use shorter test periods.  This test is purely container appdata.  Excluded are:

     

    • transcoding
    • download/extract
    • folder caching
    • array backup staging

     

    In @johnnie.black's case a huge amount of SSD wear can be avoided, which is on the opposite end of the spectrum from my test case.  I still might end up using BTRFS RAID for one or more pool devices; I just wanted to provide a reasonably solid number that other users could apply to their own loads and decide for themselves whether X times fewer writes is worth switching to XFS.
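
    If anyone wants a comparable number for their own load, one simple approach is to sample the kernel's per-device write counter over a fixed interval (a sketch; the device name is an example and the counter resets at boot):

      # sectors written since boot, converted to GB (field 7 of the stat file)
      awk '{print $7 * 512 / 1e9 " GB written"}' /sys/block/sdb/stat
      # sample once, wait 24h, sample again and take the difference; many SATA SSDs
      # also expose a lifetime Total_LBAs_Written attribute via smartctl -A if you
      # prefer a counter that survives reboots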

     

    Either way it was fun to investigate!

  4. 17 minutes ago, johnnie.black said:

    Based on some quick earlier tests, XFS would still write much less, I'd estimate at least 5 times less in my case.  Still, I can live with 190GB instead of 30/40GB a day so I can have checksums and snapshots.

    Damn, that's still pretty significant.

     

    I'm really torn on this whole issue.  I'm super butt-hurt over how much wear I've been putting on cache SSDs over the years and want to limit it as much as possible, but I'd also prefer to never have to restore my pool devices from backup, reconfigure containers, etc.

  5. 5 hours ago, johnnie.black said:

    v6.8 was writing about 3TB a day

    v6.8 with space cache v2 brought it down to a more reasonable 700GB a day

    v6.9-beta25 with the new alignment brought it down even further to 191.87GB in the last 24 hours

    Is the alignment issue something specific to NVMe?  I recall something about that, but I only have SATA, so I've skimmed over most of it.

     

    *Edit*  Before you answer: I noticed my OG cache pool is 4K-aligned and my new pool devices are 1M-aligned, so I guess it applies to all SSDs?

     

    *Edit 2*  That's a 93% decrease in writes!  I'm still testing with XFS, but I'd much rather go back to BTRFS RAID10 or a BTRFS RAID1 pair for protection, assuming the difference from XFS isn't massive.
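
    (For the arithmetic: 191.87GB/day is roughly 6% of the ~3TB/day under stock v6.8, i.e. a ~93-94% reduction, and roughly a 73% reduction compared to v6.8 with space cache v2.)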

  6. 14 hours ago, jcarroll said:

    So, I upgraded to try to set up 2 cache pools, but when I set up my shares with the cache pool I wanted each share to use, it moved EVERYTHING to my cache drives, or tried to.

     

    Is this just a bug, or am I missing something?

     

    That's very interesting.  Say, for example, I have a share that's 'cache only' and I change which pool device I want it to use; Unraid will move the data from one pool device to the other?  That would be highly useful for me in my IO testing.

     

    Quote

    There are lots of ways to configure

    @limetech can you provide more details please?

  7. 18 hours ago, TexasUnraid said:

    Yeah, I am thinking I will just use the space_cache=v2 and move things over to the cache for now and see what kind of writes I get.

     

    If they are tolerable then I will wait for 6.9 RC. If they are still too high I will consider the beta. The multiple cache pools would be really handy for me as well.

     

    Keep us posted on how things go and if you notice any bugs with the beta 🙂

    Just to give you some additional info based on my friend's use case, which had a pretty much identical cache load to mine on 6.8.3:

     

    2MB/s btrfs cache, btrfs docker.img

    650kB/s btrfs cache (w/ space_cache=v2), btrfs docker.img

    250kB/s xfs cache, btrfs docker.img

     

    So if you're not using or don't need BTRFS RAID, reformatting your cache disk to XFS makes a huge difference.  That's a change from roughly 60TB/yr to 7.5TB/yr.
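
    (Back-of-envelope: 2MB/s × 86,400 s/day ≈ 173GB/day ≈ 63TB/yr, while 250kB/s ≈ 21.6GB/day ≈ 7.9TB/yr, so the yearly figures above are rounded.)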

  8. 38 minutes ago, TexasUnraid said:

    Sure thing, although with all due respect, until a few weeks ago when this was officially acknowledged, this entire topic was nothing but "hacks" to work around the issue. 😉

     

    So if hacks are not allowed to be discussed, what is an official option to fix the issue? I really would love one, I really hate going outside officially supported channels.

     

    Thus far the only official option I have heard is wait for 6.9 which is ? months away for an RC and 6+ months away from official release?

     

    6.9 does sound like the fix; I am just uncomfortable using betas on an active server, I will always question whether an issue is due to the beta or something else. An RC is not ideal but I would consider it.

    Applying space_cache=v2 to your BTRFS mounts makes a significant difference in writes; I got a reduction of about 65%, and you can do it live on whatever version you're currently on.
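
    In practice that's something like the following (a sketch; /mnt/cache is Unraid's default pool mount, and whether a live remount picks up the option can depend on the kernel, so check /proc/mounts afterwards):

      # switch the mounted pool to the v2 free-space cache (example path)
      mount -o remount,space_cache=v2 /mnt/cache
      # confirm the option took effect
      grep /mnt/cache /proc/mounts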

     

    On another note, I did end up installing the beta, bailed on my RAID10 BTRFS cache, and now have three pool devices:

    cache pool (XFS):

    • regular share caching
    • downloads
    • docker-xfs.img
    • plex transcoding

     

    XFS pool:

    • caching only for /mnt/user/appdata

     

    BTRFS pool (single disk):

    • nothing currently

     

    I'm going to give it another 5 days with appdata on the XFS pool, then move it to the BTRFS pool for a week, then add a second disk to the BTRFS pool and run that for a week.  With transcoding removed from the equation it should only be IO from normal container operations, so it should be a pretty fair comparison.

     

    [screenshot: Screenshot from 2020-07-21 16:21:35]

     

I can do some testing of the various scenarios once we get an RC release.  Can't risk a beta in a prime pandemic Plex period.

     

    btrfs img on btrfs cache

    xfs image on btrfs cache

    folder on btrfs cache

    btrfs img on xfs cache

    xfs image on xfs cache

    folder on xfs cache

     

    Luckily I have a drawer full of the same SSD model, so I can set up some different cache pools.

  10. 18 hours ago, limetech said:

    Magic.  (actually it's an improved algorithm for maintaining data structures keeping track of free space)

     

    Thanks to @johnnie.black for pointing out this improvement.

    I guess I have to thank Facebook for developing it, begrudgingly.

     

    @limetech Is this going to become standard (or an option) in 6.9?  While switching to XFS would be nice, I can't afford new NVMe disks and am stuck with BTRFS RAID10 for now.

     

    *Edit*  That said, if I had separate cache devices for array caching, appdata, etc. I might not need RAID10 anymore, especially with less overhead from BTRFS.  Something worth testing, I guess.

  11. 38 minutes ago, TexasUnraid said:

    One use case I will be using it for off the bat would be having a separate cache for docker and appdata formatted as XFS to prevent the 10x - 100x inflated writes that happen with a BTRFS cache.

    Do you have more info on this?  I'm currently using BTRFS RAID10 for all my caching, including docker/appdata.

  12. 4 hours ago, limetech said:

    If this set of patches by itself will be useful to a lot of folks we can do it; otherwise I'd say wait until the 5.8 kernel is released.

    Is 5.8 planned for another point release, or are you guys thinking it could be a drop-in during the 6.9.0 RC?  Not sure if .0 is planned to go stable before Aug/Sept.

  13. 37 minutes ago, bonienl said:

    What do you mean, can you post a screenshot?

     

    uni-spaced:

    WDC_WD20EARS-00MVWB0_WD-WCAXXXXX532 - 2 TB (sdf)
    WDC_WD20EARS-00MVWB0_WD-WCAXXXXX085 - 2 TB (sdt)
    WDC_WD30EFRX-68EUZN0_WD-WMCXXXXX882 - 3 TB (sdh)
    WDC_WD30EFRX-68AX9N0_WD-WMCXXXXX308 - 3 TB (sdn)
    WDC_WD30EFRX-68EUZN0_WD-WMCXXXXX640 - 3 TB (sdq)
    WDC_WD30EZRX-00MMMB0_WD-WCAXXXXX178 - 3 TB (sdr)
    WDC_WD30EFRX-68AX9N0_WD-WCCXXXXX428 - 3 TB (sds)
    WDC_WD30EFRX-68AX9N0_WD-WMCXXXXX686 - 3 TB (sdu)
    WDC_WD30EFRX-68AX9N0_WD-WMCXXXXX622 - 3 TB (sdk)
    HGST_HDN724040ALE640_PK1338XXXXXNB - 4 TB (sdg)
    ST4000VN000-1H4168_Z304XXXX - 4 TB (sdm)
    ST4000VN008-2DR166_ZGY0XXXX - 4 TB (sdl)
    HGST_HDN724040ALE640_PKXXXXPCJAZ35S - 4 TB (sdp)
    ST8000VN0022-2EL112_ZA1XXXX5 - 8 TB (sdi)
    ST8000VN0022-2EL112_ZA1XXXXJ - 8 TB (sdj)
    
    VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sdc)
    VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sdb)	
    VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sdd)
    VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sde)

    non-uni-spaced:

    [screenshot: non-uniformly-spaced drive list]