Everything posted by Dephcon
-
Great, because having appdata and appdata_backup was really cramping my auto-complete game in the CLI
-
I'm also curious about this. I'd prefer to store my appdata backups at /mnt/user/backup/appdata_backup instead of a dedicated share
-
[6.8.3] docker image huge amount of unnecessary writes on cache
Dephcon commented on S1dney's report in Stable Releases
In this case, yes; however, I purposely removed some of my higher IO loads from this test to limit the variability of writes so I could have shorter test periods. This test is purely container appdata. Excluded is:
- transcoding
- download/extract folder caching
- array backup staging
In @johnnie.black's case, a huge amount of SSD wear can be avoided, which is on the opposite end of the spectrum from my test case. I still might end up using BTRFS RAID for one or more pool devices; I just wanted to provide a reasonably solid number that other users could apply to their own loads and decide for themselves whether X times fewer writes is worth switching to XFS. Either way, it was fun to investigate!
Just switched appdata from XFS to a single-disk BTRFS; it's about 2x the writes. Ignore the "avg": it's BTRFS-heavy since it starts with all my containers booting back up. If I exclude the container boot-up, the average until now is ~121kB/s, and my 2hr average on XFS before the cut-over was 49kB/s. So that's a 2.5x difference.
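For anyone wanting to reproduce this kind of measurement, here's a minimal sketch (assuming a Linux host; the device name sdb is a placeholder) that samples sectors written from /proc/diskstats:

```shell
# Print cumulative sectors written for a device (field 10 of /proc/diskstats,
# in 512-byte sectors). Device name is a placeholder — use your cache disk.
sectors_written() {
  awk -v dev="$1" '$3 == dev { print $10 }' /proc/diskstats
}

# Pure arithmetic helper: average bytes/s between two sector-count samples.
write_rate_bps() {
  # $1 = start sectors, $2 = end sectors, $3 = elapsed seconds
  echo $(( ($2 - $1) * 512 / $3 ))
}

# Example usage (uncomment; "sdb" is hypothetical):
# start=$(sectors_written sdb); sleep 120; end=$(sectors_written sdb)
# echo "avg: $(write_rate_bps "$start" "$end" 120) B/s"
```

Sampling over a couple of hours, like the 2hr averages quoted above, smooths out bursts from container startups.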
Damn, that's still pretty significant. I'm really torn on this whole issue. I'm super butt-hurt over how much wear I've been putting on my cache SSDs over the years and want to limit it as much as possible, but I'd also prefer to never have to restore my pool devices from backup, reconfigure containers, etc.
Is the alignment issue something specific to NVMe? I recall something about that, but I only have SATA so I've skimmed over most of it. *Edit* Before you answer: I noticed my OG cache pool is 4K-aligned and my new pool devices are 1M-aligned, so I guess it applies to all SSDs? *Edit2* That's a 93% decrease in writes! I'm still testing with XFS, but I'd much rather go back to BTRFS RAID10 or a pair of BTRFS RAID1s for protection, assuming it's not a massive difference from XFS.
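For reference, checking alignment yourself is just arithmetic on the partition's start sector (shown by `fdisk -l` or in /sys/block/sdX/sdX1/start): with 512-byte sectors, a 1MiB-aligned partition starts on a multiple of 2048. A small sketch:

```shell
# Report whether a partition start sector (512-byte sectors) is 1MiB-aligned.
# 1MiB = 2048 sectors of 512 bytes.
is_1mib_aligned() {
  if [ $(( $1 % 2048 )) -eq 0 ]; then echo aligned; else echo unaligned; fi
}

is_1mib_aligned 2048   # typical modern 1MiB-aligned start
is_1mib_aligned 64     # older-style start at 32KiB: 4K-aligned, not 1MiB-aligned
```

The same check works for any SSD, SATA or NVMe; only the start sector matters.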
That's very interesting. Say, for example, I have a share that's 'cache only' and I change which pool device I want it to use; Unraid will move the data from one pool device to the other? That would be highly useful for me in my IO testing. @limetech can you provide more details please?
-
Just to give you some additional info based on my friend's use case, who had pretty much identical cache load to mine on 6.8.3:
- 2MB/s: btrfs cache, btrfs docker.img
- 650kB/s: btrfs cache (w/ space_cache=v2), btrfs docker.img
- 250kB/s: xfs cache, btrfs docker.img
So if you're not using or don't need BTRFS RAID, re-formatting your cache disk to XFS makes a huge difference. That's a change from 60TB/yr to 7.5TB/yr.
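The TB/yr figures fall out of simple arithmetic: sustained rate times seconds per year. A quick sketch of the conversion, using decimal units:

```shell
# Convert a sustained write rate in kB/s to TB written per year (decimal TB).
# 1 year ≈ 31,536,000 seconds; 1 TB = 1e12 bytes.
kbps_to_tb_per_year() {
  awk -v r="$1" 'BEGIN { printf "%.1f\n", r * 1000 * 31536000 / 1e12 }'
}

kbps_to_tb_per_year 2000   # 2 MB/s on btrfs → prints 63.1
kbps_to_tb_per_year 250    # 250 kB/s on xfs → prints 7.9
```

That matches the rounded ~60TB/yr vs ~7.5TB/yr comparison above.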
Upgraded from 6.8.3 ~6 days ago without any issues so far. Only running containers (br interface), no VMs to report on. I am using multiple pool devices and it's pretty slick.
-
Applying space_cache=v2 to your btrfs mounts makes a significant difference in writes; I got a reduction of about 65%, and you can do it live on whatever version you're currently on.

On another note: I did end up installing the beta, bailed on my RAID10 BTRFS cache, and now have three pool devices:
- cache pool (xfs): regular share caching, downloads, docker-xfs.img, plex transcoding
- xfs pool: caching for only /usr/mnt/appdata
- btrfs pool (single disk): nothing currently

I'm going to give it another 5 days with appdata on the XFS pool, then move it to the BTRFS pool for a week, then add a second disk to the BTRFS pool and run that for a week. With transcoding removed from the equation it should only be IO from normal container operations, so it should be a pretty fair comparison.
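For anyone wanting to try space_cache=v2 themselves, a hedged sketch of the commands (the device node is a placeholder, /mnt/cache is unRAID's usual cache mount point; note that a plain remount may not convert an existing v1 cache, so a full unmount/mount is the safer route):

```shell
# Enable the v2 space cache (free space tree) on a btrfs filesystem.
# /dev/sdX1 is a placeholder — substitute your actual cache partition.
# Stop anything using the mount (docker, VMs) before unmounting.
umount /mnt/cache
mount -o space_cache=v2 /dev/sdX1 /mnt/cache

# Verify the option took effect:
grep ' /mnt/cache ' /proc/mounts
```

Once the free space tree has been created, btrfs keeps using it on subsequent mounts.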
Anyone had any luck using inputs.bond? I get this in my log:

2020-07-15T18:18:00Z E! [inputs.bond] Error in plugin: error inspecting '/rootfs/proc/net/bonding/bond0' interface: open /rootfs/proc/net/bonding/bond0: no such file or directory

Then I set the path in the conf with 'host_proc = "/proc"' and got this:

2020-07-15T18:19:00Z E! [inputs.bond] Error in plugin: error inspecting '/proc/net/bonding/bond0' interface: open /proc/net/bonding/bond0: no such file or directory
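In case it helps anyone hitting the same errors: both paths suggest the container simply can't see the host's bonding proc files. A hedged sketch of a volume mapping that should expose them (the image name, config path, and the /rootfs prefix are assumptions based on the error text above; adjust to your setup):

```shell
# Sanity check on the host first — the bond file must exist at all:
cat /proc/net/bonding/bond0

# Run telegraf with the host's /proc mapped to /rootfs/proc inside the
# container, matching the path the plugin was originally looking for.
docker run -d --name telegraf \
  -v /proc:/rootfs/proc:ro \
  -v /path/to/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf
```

If the host-side `cat` fails too, the bond interface isn't named bond0 (or bonding isn't enabled), and no container mapping will fix it.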
-
I might have to install beta25 sometime this week as I'm very curious now lol
blkdiscard /dev/sdX or blkdiscard /mnt/cache? Can I assume space_cache=v2 is being used for the testing / will be the default in an upcoming release?
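For anyone else wondering the same thing: blkdiscard takes a block device node, not a mount point, and it discards (destroys) everything on the device. A cautious sketch, with /dev/sdX as a placeholder:

```shell
# blkdiscard wipes the whole device via TRIM — triple-check the target.
# Unmount (or stop the array) first; it will refuse or corrupt a live mount.
umount /mnt/cache
blkdiscard /dev/sdX   # placeholder — the raw device, not a partition path
```

After the discard you'd re-partition and re-format before putting the disk back in the pool.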
I can do some testing of the various scenarios once we get an RC release. Can't risk a beta in a prime pandemic Plex period.
- btrfs img on btrfs cache
- xfs img on btrfs cache
- folder on btrfs cache
- btrfs img on xfs cache
- xfs img on xfs cache
- folder on xfs cache
Luckily I have a drawer full of the same SSD model, so I can set up some different cache pools.
I guess I have to thank Facebook for developing it, begrudgingly. @limetech is this going to become a standard (or an option) in 6.9? While switching to XFS would be nice, I can't afford new NVMe disks and am stuck with BTRFS RAID10 for now. *Edit* That said, if I had separate cache devices for array caching, appdata, etc. I might not need RAID10 anymore, esp. with less overhead from btrfs. Something worth testing, I guess.
unRAID 6 NerdPack - CLI tools (iftop, iotop, screen, kbd, etc.)
Dephcon replied to jonp's topic in Plugin Support
"Ok. Try now. The name can't have an underscore in it." Awesome, it works! Thank you.
6.8.3, and the plugin is up to date (I update nightly). *Edit* The plugin version says 2019.12.31; is that correct?
Am I missing something? I'm not seeing this in the NerdPack list of tools on my system.
Do you have more info on this? Currently using BTRFS RAID10 for all my caching, including docker/appdata.
-
Is 5.8 planned for another point release, or are you guys thinking it could be a drop-in during the 6.9.0 RC? Not sure if .0 is planned to go stable before Aug/Sept.
-
Any chance of adding sg3-utils? This tool is useful for pushing firmware to SAS expanders: http://sg.danny.cz/sg/sg3_utils.html
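For context, a hedged sketch of how sg3_utils is typically used for this. The device node, firmware filename, and write-buffer mode below are placeholders, not instructions for any particular expander; always follow the vendor's flashing procedure.

```shell
# List SCSI generic device nodes to find the expander/enclosure (sg_map is
# part of sg3_utils).
sg_map -i

# sg_write_buffer can download firmware to a SCSI device. The mode and
# chunk size (--bpw) vary by device — these values are illustrative only.
sg_write_buffer --mode=dmc_offs_save --bpw=4096 \
  --in=firmware.bin /dev/sgN   # /dev/sgN and firmware.bin are placeholders
```

Getting the mode or target wrong can brick an expander, which is exactly why having the proper tooling in NerdPack beats improvising.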
[Support] Linuxserver.io - Unifi-Controller
Dephcon replied to linuxserver.io's topic in Docker Containers
Huge thanks to CHBMB, this is a nice upgrade to the container. No issues migrating from unifi:unstable > unifi-controller:5.9 > unifi-controller:latest
Thanks for doing the monospace font for drive identities! Is it possible to include the size and sdX identifier as well so they also line up nicely? Essentially, the whole Identification cell should be monospaced. Thanks!
-
Ah, that's the term I was looking for!!!