
Dephcon

Members
  • Content Count: 578
  • Joined
  • Last visited
  • Days Won: 1

Dephcon last won the day on June 28, 2018

Dephcon had the most liked content!

Community Reputation: 12 Good

1 Follower

About Dephcon
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed


  1. I used to do it in RAM when I had 32GB; when I upgraded I only had 16GB of DDR4 available, so it's a bit tight now.
  2. Just wanted to circle back to this now that my testing is over and I've finalized my caching config (for now). Previously I was using a 4-SSD BTRFS RAID10 "cache" with 4K partition alignment. Now I have:
     - a 2-SSD BTRFS RAID1, partitioned at 1MiB, for array cache and docker-xfs.img
     - an XFS-formatted pool device for scratch space (currently Plex transcoding and the Duplicacy cache)
     I might move my Usenet download/extract over to this scratch pool as well, but I want to get extended performance data before changing anything further. I'm pretty happy with the reduction in IO from space_cache v2 and 1MiB partitioning. All XFS would have been "better" for disk longevity, but I really like the extra level of protection from BTRFS RAID. Last 48hrs:
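     A minimal sketch of how a 1MiB-aligned partition could be created on a blank SSD with parted; /dev/sdX is a placeholder, and this is generic GPT/parted usage rather than the exact steps used for the pools above (Unraid normally handles partitioning itself):

        # WARNING: destroys existing data on /dev/sdX (placeholder device)
        parted -s /dev/sdX mklabel gpt
        # starting the first partition at 1MiB keeps it aligned to 1MiB boundaries
        parted -s /dev/sdX mkpart primary 1MiB 100%
        # verify the start sector: 2048 x 512-byte sectors = 1MiB
        parted /dev/sdX unit s print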
  3. Great, because having both appdata and appdata_backup was really cramping my auto-complete game in the CLI.
  4. I'm also curious about this. I'd prefer to store my appdata backups at /mnt/user/backup/appdata_backup instead of in a dedicated share.
  5. In this case, yes; however, I purposely removed some of my higher-IO loads from this test to limit the variability of writes so I could have shorter test periods. This test is purely container appdata. Excluded are:
     - transcoding
     - download/extract
     - folder caching
     - array backup staging
     In @johnnie.black's case, a huge amount of SSD wear can be avoided, which is on the opposite end of the spectrum from my test case. I still might end up using BTRFS RAID for one or more pool devices; I just wanted to provide a reasonably solid number that other users could apply to their own loads and decide for themselves whether X times fewer writes is worth switching to XFS. Either way, it was fun to investigate!
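     For anyone wanting to put a long-term number on their own SSD wear, one generic way (an illustration, not part of the test above) is to read the drive's SMART counters; attribute names vary by vendor, so treat these as examples:

        # lifetime host writes as reported by the drive (/dev/sdX is a placeholder)
        smartctl -A /dev/sdX
        # on many SATA SSDs, look for Total_LBAs_Written and multiply by the sector size (usually 512 bytes)
        # on NVMe drives the equivalent is "Data Units Written" (each unit = 512,000 bytes), via:
        # smartctl -A /dev/nvme0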
  6. Just switched appdata from XFS to single-disk BTRFS, and it's about 2x the writes. Ignore the "avg" figure; it's BTRFS-heavy because it starts with all my containers booting back up. If I exclude the container boot-up, the average so far is ~121kB/s, and my 2hr average on XFS before the cut-over was 49kB/s. So that's a 2.5x difference.
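     A rough sketch of how a write rate like the kB/s figures above could be sampled from the kernel's disk counters (generic Linux, not necessarily the tooling used here; sdX is a placeholder):

        # sectors written are field 10 of /proc/diskstats; 1 sector = 512 bytes
        S1=$(awk '$3=="sdX" {print $10}' /proc/diskstats)
        sleep 600
        S2=$(awk '$3=="sdX" {print $10}' /proc/diskstats)
        # average write rate over the 600s interval, in kB/s
        echo $(( (S2 - S1) * 512 / 1000 / 600 ))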
  7. Damn, that's still pretty significant. I'm really torn on this whole issue. I'm super butt-hurt over how much wear I've been putting on cache SSDs over the years and want to limit it as much as possible, but I'd also prefer to never have to restore my pool devices from backup, reconfigure containers, etc.
  8. Is the alignment issue something specific to NVMe? I recall something about that, but I only have SATA, so I've skimmed over most of it. *Edit* Before you answer: I noticed my OG cache pool is 4K-aligned and my new pool devices are 1MiB-aligned, so I guess it applies to all SSDs? *Edit 2* That's a 93% decrease in writes! I'm still testing with XFS, but I'd much rather go back to BTRFS RAID10 or a pair of BTRFS RAID1 pools for protection, assuming it's not a massive difference from XFS.
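     For reference, one generic way to check how an existing partition is aligned (not specific to Unraid; sdX/sdX1 are placeholders) is to read the partition's start sector from sysfs, reported in 512-byte sectors:

        # start sector of the first partition, in 512-byte sectors
        cat /sys/block/sdX/sdX1/start
        # 2048 x 512 bytes = 1MiB start; 8 x 512 bytes = 4KiB start
        # the same information is visible with:
        fdisk -l /dev/sdX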
  9. That's very interesting. Say, for example, I have a share that's 'cache only' and I change which pool device I want it to use; Unraid will move the data from one pool device to the other? That would be highly useful for me in my IO testing. @limetech can you provide more details please?
  10. Just to give you some additional info based on my friend's use case, which had pretty much an identical cache load to mine on 6.8.3:
      - 2MB/s: btrfs cache, btrfs docker.img
      - 650kB/s: btrfs cache (w/ space_cache=v2), btrfs docker.img
      - 250kB/s: xfs cache, btrfs docker.img
      So if you're not using or don't need BTRFS RAID, reformatting your cache disk to XFS makes a huge difference. That's a change from ~60TB/yr to ~7.5TB/yr.
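     The TB/yr figures follow from a straightforward rate conversion; a quick check (rounded, decimal units), which lands close to the ~60 and ~7.5 TB/yr quoted above:

        # sustained MB/s -> TB/yr: MB/s x 86,400 s/day x 365 days / 1,000,000 MB per TB
        awk 'BEGIN { printf "%.1f TB/yr\n", 2.00 * 86400 * 365 / 1e6 }'   # ~63 TB/yr at 2 MB/s
        awk 'BEGIN { printf "%.1f TB/yr\n", 0.25 * 86400 * 365 / 1e6 }'   # ~7.9 TB/yr at 250 kB/s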
  11. Upgraded from 6.8.3 ~6 days ago without any issues so far. Only running containers (br interface), no VMs to report on. I am using multiple pool devices and it's pretty slick.
  12. Applying space_cache=v2 to your btrfs mounts makes a significant difference in writes (I got a reduction of about 65%), and you can do it live on whatever version you're currently on. On another note, I did end up installing the beta, bailed on my RAID10 btrfs cache, and now have three pool devices:
      - cache pool (xfs): regular share caching, downloads, docker-xfs.img, plex transcoding
      - xfs pool: caching for only /mnt/user/appdata
      - btrfs pool (single disk): nothing currently
      I'm going to give it another 5 days with appdata on the XFS pool, then move it to the BTRFS pool for a week, then add a second disk to the BTRFS pool and run that for a week. With transcoding removed from the equation it should only be IO from normal container operations, so it should be a pretty fair comparison.
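     A minimal sketch of applying the option live, assuming the btrfs pool is mounted at /mnt/cache (mount point varies per setup; the post above is the source for this being doable on a live mount):

        # remount the existing btrfs filesystem with the v2 free-space cache
        mount -o remount,space_cache=v2 /mnt/cache
        # confirm the active mount options
        mount | grep /mnt/cache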
  13. Anyone had any luck using inputs.bond? I get this in my log:
      2020-07-15T18:18:00Z E! [inputs.bond] Error in plugin: error inspecting '/rootfs/proc/net/bonding/bond0' interface: open /rootfs/proc/net/bonding/bond0: no such file or directory
      Then I set the path in the conf with 'host_proc = "/proc"' and got this:
      2020-07-15T18:19:00Z E! [inputs.bond] Error in plugin: error inspecting '/proc/net/bonding/bond0' interface: open /proc/net/bonding/bond0: no such file or directory
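     Not a confirmed fix, but two common causes when Telegraf runs in a container: the host's /proc isn't mapped into the container at the path the plugin reads, and /proc/net/bonding/bond0 only exists on the host at all if bonding is enabled. A generic sketch of the usual mapping (container name and paths are illustrative):

        # on the host: this file must exist first (i.e. bonding is enabled)
        cat /proc/net/bonding/bond0
        # run telegraf with the host's /proc mapped read-only into the container
        docker run -d --name telegraf \
          -v /proc:/rootfs/proc:ro \
          -e HOST_PROC=/rootfs/proc \
          -v /path/to/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
          telegraf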
  14. I might have to install beta25 sometime this week as I'm very curious now lol
  15. blkdiscard /dev/sdX or blkdiscard /mnt/cache? Can I assume space_cache=v2 is being used for the testing/default in an upcoming release?
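      For what it's worth, the general usage of these tools (not the thread's answer): blkdiscard takes a block device and discards everything on it, so it's run against an unmounted /dev node, while fstrim is the tool aimed at a mounted filesystem path. A hedged sketch:

         # discards ALL data on the device; only run on an unmounted/empty disk (sdX is a placeholder)
         blkdiscard /dev/sdX
         # for an already-mounted filesystem, trim free space instead
         fstrim -v /mnt/cache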