Dephcon

Members
  • Posts: 585
  • Days Won: 1

Dephcon last won the day on June 28, 2018

Dephcon had the most liked content!

1 Follower



Dephcon's Achievements

Enthusiast (6/14)

Reputation: 15

  1. I just saw this as well; they specifically mentioned Unraid, but I'm not sure they actually tested it. @limetech, could you comment on support for these types of disks? Is it just a matter of whether it has a GUID or not?
  2. Oh snap! For some reason I assumed /dev/dri was an exclusive-mode type thing. Thanks!
  3. How do the virtual GPUs work for multiple containers? Currently I have the bare-metal iGPU passed through to my Plex container with --device=/dev/dri. I'd like to attach a virtual GPU to both the Plex and Jellyfin containers.
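For reference, /dev/dri is not exclusive-mode — DRM render nodes allow concurrent clients — so the same device can simply be attached to both containers. A minimal sketch (container names and images are examples, not the poster's exact setup):

```shell
# Sketch: attach the same iGPU render device to two containers.
# /dev/dri allows concurrent clients, so sharing it is fine.
docker run -d --name plex     --device=/dev/dri plexinc/pms-docker
docker run -d --name jellyfin --device=/dev/dri jellyfin/jellyfin
```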
  4. Did you clear your browser cache?
  5. @limetech with slackware 15.0/kernel 5.15.x now out, are you looking to bump up this RC or will you be staying on 5.14?
  6. Unfortunately, the amount of counterfeiting on Amazon of SanDisk/Samsung SD cards, and now apparently USB keys, is brutal. Typically the most popular brands are targeted due to higher volume. For those curious: at Amazon warehouses, Amazon's own stock and stock from 3rd-party resellers are stored in the same bins, so counterfeit items with the same SKU dilute Amazon's legit stock.
  7. Intel GVT-g looks sick, should help with my migration from Plex to Jellyfin without destroying my CPU. @limetech I was hoping 6.10 would include multiple arrays, do you have a planned release for this feature?
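For anyone curious how GVT-g vGPUs get created under the hood, the kernel exposes them through the mediated-device (mdev) sysfs interface. A rough sketch — the PCI address and vGPU type name vary per system and are only examples here:

```shell
# Sketch: create an Intel GVT-g vGPU instance via the mdev sysfs interface.
# 0000:00:02.0 is the typical iGPU address; the type name depends on the GPU.
GVT_PCI=0000:00:02.0
GVT_TYPE=i915-GVTg_V5_4
uuidgen > /sys/bus/pci/devices/$GVT_PCI/mdev_supported_types/$GVT_TYPE/create
```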
  8. I used to do it in RAM when I had 32GB; when I upgraded I only had 16GB of DDR4 available, so it's a bit tight now.
  9. Just wanted to circle back to this now that my testing is over and I've finalized my caching config (for now). Previously I was using a 4-SSD BTRFS RAID10 "cache" pool with 4K partition alignment. Now I have:
     • a 2-SSD BTRFS RAID1, 1MiB-aligned, for array cache and docker-xfs.img
     • an XFS-formatted pool device for scratch space; currently this includes Plex transcoding and the duplicacy cache
     I might move my usenet download/extract over to this scratch pool as well, but I want to get extended performance data before changing anything further. I'm pretty happy with the reduction in IO from space_cache v2 and 1MiB partitioning. All-XFS would have been "better" for disk longevity, but I really like the extra level of protection from BTRFS RAID. Last 48hrs:
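The pool layout described above could be reproduced along these lines. A sketch only — device names are examples and these commands are destructive:

```shell
# 1MiB-aligned partitions on both SSDs, then a BTRFS RAID1 pool.
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
parted -s /dev/sdc mklabel gpt mkpart primary 1MiB 100%
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb1 /dev/sdc1
# Mount with the v2 free-space tree, which is what cut the IO.
mount -o space_cache=v2 /dev/sdb1 /mnt/cache
```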
  10. Great, because having appdata and appdata_backup was really cramping my auto-complete game in the CLI.
  11. I'm also curious about this. I'd prefer to store my appdata backups at /mnt/user/backup/appdata_backup instead of a dedicated share.
  12. In this case, yes; however, I purposely removed some of my higher-IO loads from this test to limit the variability of writes so I could have shorter test periods. This test is purely container appdata. Excluded are:
      • transcoding
      • download/extract
      • folder caching
      • array backup staging
      In @johnnie.black's case, a huge amount of SSD wear can be avoided, which is on the opposite end of the spectrum from my test case. I still might end up using BTRFS RAID for one or more pool devices; I just wanted to provide a reasonably solid number that other users could apply to their own loads and decide for themselves whether X times fewer writes is worth switching to XFS. Either way, it was fun to investigate!
  13. Just switched appdata from XFS to a single-disk BTRFS, and it's about 2x the writes. Ignore the "avg" — it's BTRFS-heavy because it starts with all my containers spinning back up. If I exclude the container boot-up, the average until now is ~121kB/s, and my 2hr average on XFS before the cut-over was 49kB/s, so that's a 2.5x difference.
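One way to get write-rate numbers like these is to sample the sectors-written counter in /proc/diskstats twice and take the difference. A sketch — the device name and the 60-second window are example choices:

```shell
# Field 10 of /proc/diskstats is sectors written (512 bytes each).
dev=sdb                      # example device name
w1=$(awk -v d="$dev" '$3==d {print $10}' /proc/diskstats)
sleep 60
w2=$(awk -v d="$dev" '$3==d {print $10}' /proc/diskstats)
echo "avg write rate: $(( (w2 - w1) * 512 / 60 )) B/s"
```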
  14. Damn, that's still pretty significant. I'm really torn on this whole issue. I'm super butt-hurt over how much wear I've been putting on cache SSDs over the years and want to limit it as much as possible, but I'd also prefer to never have to restore my pool devices from backup, reconfigure containers, etc.
  15. Is the alignment issue something specific to NVMe? I recall something about that, but I only have SATA, so I've skimmed over most of it. *Edit* Before you answer: I noticed my OG cache pool is 4K-aligned and my new pool devices are 1MiB-aligned, so I guess it applies to all SSDs? *Edit 2* That's a 93% decrease in writes! I'm still testing with XFS, but I'd much rather go back to BTRFS RAID10 or a pair of BTRFS RAID1s for protection, assuming it's not a massive difference from XFS.
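Checking alignment yourself is straightforward: a partition's start sector is exposed in sysfs (in 512-byte sectors), and it is 1MiB-aligned if the start offset in bytes is a multiple of 1048576. Device and partition names below are examples:

```shell
# Start sector of the partition, in 512-byte sectors.
start=$(cat /sys/block/sda/sda1/start)
if [ $(( start * 512 % 1048576 )) -eq 0 ]; then
    echo "sda1 is 1MiB-aligned (starts at byte $(( start * 512 )))"
else
    echo "sda1 is NOT 1MiB-aligned"
fi
```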