Report Comments posted by Dephcon
-
@limetech With Slackware 15.0 / kernel 5.15.x now out, are you looking to bump this RC up or will you be staying on 5.14?
-
-
Intel GVT-g looks sick, should help with my migration from Plex to Jellyfin without destroying my CPU.
@limetech I was hoping 6.10 would include multiple arrays, do you have a planned release for this feature?
-
-
2 hours ago, nickp85 said:
Shift your Plex transcoding to memory by putting it in /tmp! I did this not long ago: create a RAM disk on boot in /tmp with 4 GB of space and let Plex use it for transcoding. You can allocate more if you want. There is a post on the forum about it somewhere. Great for reducing wear and tear on the disk.
I used to do it in RAM when I had 32GB; when I upgraded I only had 16GB of DDR4 available, so it's a bit tight now.
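For readers who haven't seen the forum post being referenced, a minimal sketch of the usual approach, assuming a 4 GiB size cap and an example mount point (Plex's transcoder directory then gets pointed at it):

# Mount a size-capped, RAM-backed tmpfs for Plex transcoding (run at boot, e.g. from the go file).
# The 4g size and the path are examples only.
mkdir -p /tmp/plex-transcode
mount -t tmpfs -o size=4g tmpfs /tmp/plex-transcode
# Then set Plex's "Transcoder temporary directory" (or the container's /transcode mapping)
# to /tmp/plex-transcode.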
-
Just wanted to circle back to this now that my testing is over and I've finalized my caching config (for now).
Previously I was using a 4-SSD BTRFS RAID10 for "cache", with 4K partitioning.
Now I have a 2-SSD BTRFS RAID1, 1M-partitioned, for array cache and docker-xfs.img, plus an XFS-formatted pool device to use as scratch space. Currently that scratch space covers Plex transcoding and the Duplicacy cache. I might move my Usenet download/extract over to this scratch pool as well, but I want to gather extended performance data before changing anything further.
I'm pretty happy with the reduction in IO from space_cache v2 and 1MiB partitioning. All-XFS would have been "better" for disk longevity, but I really like the extra level of protection from BTRFS RAID.
last 48hrs:
-
-
23 minutes ago, limetech said:
But negligible given absolute amount of data written.
A loopback is always going to incur more overhead because there is the overhead of the file system within the loopback and then there is the overhead of the file system hosting the loopback. In most cases the benefit of the loopback far outweighs the extra overhead.
In this case, yes; however, I purposely removed some of my higher-IO loads from this test to limit the variability of writes so I could run shorter test periods. This test is purely container appdata; excluded are:
- transcoding
- download/extract
- folder caching
- array backup staging
In @johnnie.black's case, a huge amount of SSD wear can be avoided, which is at the opposite end of the spectrum from my test case. I still might end up using BTRFS RAID for one or more pool devices; I just wanted to provide a reasonably solid number that other users could apply to their own loads and decide for themselves whether X times fewer writes is worth switching to XFS.
Either way it was fun to investigate!
-
Just switched appdata from XFS to a single-disk BTRFS; it's about 2x the writes:
Ignore the "avg": it's btrfs-heavy because the window starts with all my containers booting back up. If I exclude the container boot-up, the average so far is ~121 kB/s, and my 2-hour average on XFS before the cut-over was 49 kB/s. So that's a 2.5x difference.
-
17 minutes ago, johnnie.black said:
Based on some quick earlier tests, xfs would still write much less, I would estimate at least 5 times less in my case; still, I can live with 190GB instead of 30/40GB a day so I can have checksums and snapshots.
Damn, that's still pretty significant.
I'm really torn on this whole issue. I'm super butt-hurt over how much wear I've been putting on cache SSDs over the years and want to limit it as much as possible, but I'd also prefer to never have to restore my pool devices from backup, reconfigure containers, etc.
-
5 hours ago, johnnie.black said:
v6.8 was writing about 3TB a day
v6.8 with space cache v2 brought it down to a more reasonable 700GB a day
v6.9-beta25 with the new alignment brought it down even further to 191.87GB in the last 24 hours
Is the alignment issue something regarding NVMe? I recall something about that, but I only have SATA so I've skimmed over most of it.
*Edit* Before you answer: I noticed my OG cache pool is 4K-aligned and my new pool devices are 1M-aligned, so I guess it applies to all SSDs?
*Edit 2* That's a 93% decrease in writes! I'm still testing with XFS, but I'd much rather go back to BTRFS RAID10 or a pair of BTRFS RAID1 pools for protection, assuming it's not a massive difference from XFS.
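In case anyone else wants to check their own pools: one way to see how an existing pool member is partitioned is to look at the partition's start sector (sdX is just a placeholder). A start sector that's a multiple of 2048 means 1 MiB alignment, since 2048 x 512 B = 1 MiB:

cat /sys/block/sdX/sdX1/start   # start sector of partition 1
# or equivalently:
fdisk -l /dev/sdX               # shows the start sector in the partition table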
-
14 hours ago, jcarroll said:
So, I upgraded to try to set up 2 cache pools, but when I set up my shares with the cache pool I wanted each share to use, it moved EVERYTHING to my cache drives, or tried to.
Is this just a bug, or am I missing something?
That's very interesting. Say, for example, I have a share that's 'cache only' and I change which pool device I want it to use: Unraid will move the data from one pool device to the other? That would be highly useful for me in my IO testing.
Quote:
There are lots of ways to configure
@limetech can you provide more details please?
-
18 hours ago, TexasUnraid said:
Yeah, I am thinking I will just use the space_cache=v2 and move things over to the cache for now and see what kind of writes I get.
If they are tolerable then I will wait for 6.9 RC. If they are still too high I will consider the beta. The multiple cache pools would be really handy for me as well.
Keep us posted on how things go and if you notice any bugs with the beta 🙂
Just to give you some additional info based on my friend's use case, which had a pretty much identical cache load to mine on 6.8.3:
2 MB/s: btrfs cache, btrfs docker.img
650 kB/s: btrfs cache (w/ space_cache=v2), btrfs docker.img
250 kB/s: xfs cache, btrfs docker.img
So if you're not using or don't need BTRFS RAID, re-formatting your cache disk to XFS makes a huge difference. That's a drop from roughly 60 TB/yr to 7.5 TB/yr.
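A quick back-of-envelope for those yearly figures, assuming the measured rate is sustained around the clock (about 31.5 million seconds per year); it lands near the ~60 and ~7.5 TB/yr quoted above, give or take rounding:

# kB/s sustained -> TB written per year (decimal units)
awk -v kbps=2000 'BEGIN { printf "%.1f TB/yr\n", kbps * 31536000 / 1e9 }'  # ~63 TB/yr
awk -v kbps=250  'BEGIN { printf "%.1f TB/yr\n", kbps * 31536000 / 1e9 }'  # ~7.9 TB/yr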
-
Upgraded from 6.8.3 ~6 days ago without any issues so far. Only running containers (br interface), no VMs to report on.
I am using multiple pool devices and it's pretty slick.
-
-
38 minutes ago, TexasUnraid said:
Sure thing, although with all due respect, until a few weeks ago when this was officially acknowledged, this entire topic was nothing but "hacks" to work around the issue. 😉
So if hacks are not allowed to be discussed, what is an official option to fix the issue? I really would love one, I really hate going outside officially supported channels.
Thus far the only official option I have heard is wait for 6.9 which is ? months away for an RC and 6+ months away from official release?
6.9 does sound like the fix, I am just uncomfortable using betas on an active server; I will always question whether an issue is due to the beta or something else. An RC is not ideal but I would consider it.
Applying space_cache=v2 to your btrfs mounts makes a significant difference in writes; I got a reduction of about 65%, and you can do it live on whatever version you're currently on.
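For anyone who wants to try it before upgrading, a minimal sketch of the live change discussed in these threads (/mnt/cache is Unraid's usual pool mount point; adjust to yours, and note it would need re-applying at each boot, e.g. from the go file, to persist):

# Switch an already-mounted btrfs pool to the v2 free-space cache, live:
mount -o remount,space_cache=v2 /mnt/cache
# Confirm the option is active:
mount | grep /mnt/cache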
On another note: I did end up installing the beta, bailed on my RAID10 btrfs cache, and now have three pool devices:
cache pool(xfs):
- regular share caching
- downloads
- docker-xfs.img
- plex transcoding
xfs pool:
- caching for only /mnt/user/appdata
btrfs pool(single disk):
- nothing currently
I'm going to give it another 5 days with appdata on the XFS pool, then move it to the BTRFS pool for a week, then add a second disk to the BTRFS pool and run that for a week. With transcoding removed from the equation it should only be IO from normal container operations, so it should be a pretty fair comparison.
-
1 hour ago, limetech said:
blkdiscard /dev/sdX # that is, on the raw device
Correct. That is the default now.
I might have to install beta25 sometime this week as I'm very curious now, lol.
-
1 minute ago, limetech said:
Thank you, we're doing the same testing as well. Other combinations would be:
single-device btrfs pool
multiple-device (x2) btrfs pool
Also, before each test run it's best to 'blkdiscard' the entire SSD(s) first.
blkdiscard /dev/sdX or blkdiscard /mnt/cache?
Can I assume space_cache=v2 is being used for the testing/default in an upcoming release?
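On why the full-device discard matters before each run: presumably it returns the SSD to a clean state so no test inherits stale data or garbage-collection backlog from the previous run. A minimal sketch, assuming sdX is the raw, unassigned and unmounted device (this wipes everything on it):

# DESTRUCTIVE: discards every block on the whole device, not just a partition.
blkdiscard /dev/sdX
# Then re-create and format the pool before starting the next test run.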
-
I can do some testing of the various scenarios once we get an RC release. Can't risk a beta in a prime pandemic Plex period.
btrfs img on btrfs cache
xfs img on btrfs cache
folder on btrfs cache
btrfs img on xfs cache
xfs img on xfs cache
folder on xfs cache
Luckily I have a drawer full of the same SSD model, so I can set up some different cache pools.
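For readers unfamiliar with the 'img on cache' combinations: the Docker image is a filesystem-in-a-file (loopback) that sits on top of the cache filesystem, so writes pass through both layers. Unraid creates and manages its own docker.img, so the following is only a generic, hypothetical illustration of that layering (path and size are made up):

truncate -s 20G /mnt/cache/docker-xfs.img     # sparse 20 GiB file living on the cache filesystem
mkfs.xfs /mnt/cache/docker-xfs.img            # format the file itself as XFS
mkdir -p /mnt/docker
mount -o loop /mnt/cache/docker-xfs.img /mnt/docker   # mount the file as its own filesystem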
-
18 hours ago, limetech said:
Magic. (actually it's an improved algorithm for maintaining data structures keeping track of free space)
Thanks to @johnnie.black for pointing out this improvement.
I guess I have to thank Facebook for developing it, begrudgingly.
@limetech Is this going to become a standard (or option) in 6.9? While switching to XFS would be nice, I can't afford new NVMe disks and am stuck with BTRFS RAID10 for now.
*Edit* That said, if I had separate cache devices for array caching, appdata, etc., I might not need RAID10 anymore, especially with less overhead from btrfs. Something worth testing, I guess.
-
38 minutes ago, TexasUnraid said:
One use case I will be using it for off the bat would be having a separate cache for docker and appdata formatted as XFS to prevent the 10x - 100x inflated writes that happen with a BTRFS cache.
Do you have more info on this? Currently using BTRFS RAID10 for all my caching, including docker/appdata.
-
4 hours ago, limetech said:
If this set of patches by itself will be useful to a lot of folks we can do it, otherwise I'd say wait until the 5.8 kernel is released.
Is 5.8 planned for another point release, or are you guys thinking it could be a drop-in during the 6.9.0 RC? Not sure if .0 is planned to go stable before Aug/Sept.
-
Thanks for doing the monospace font for drive identities! Is it possible to include the size and sdX identifier as well so they also line up nicely? Essentially, the whole Identification cell should be monospaced.
Thanks!
-
1 minute ago, rctneil said:
I, too, wondered what you meant by uni-spaced, but I see now! I would use the term "monospaced" myself though. And yes, totally agree!
Ah, that's the term I was looking for!!!
-
37 minutes ago, bonienl said:
What do you mean, can you post a screenshot?
uni-spaced:
WDC_WD20EARS-00MVWB0_WD-WCAXXXXX532 - 2 TB (sdf)
WDC_WD20EARS-00MVWB0_WD-WCAXXXXX085 - 2 TB (sdt)
WDC_WD30EFRX-68EUZN0_WD-WMCXXXXX882 - 3 TB (sdh)
WDC_WD30EFRX-68AX9N0_WD-WMCXXXXX308 - 3 TB (sdn)
WDC_WD30EFRX-68EUZN0_WD-WMCXXXXX640 - 3 TB (sdq)
WDC_WD30EZRX-00MMMB0_WD-WCAXXXXX178 - 3 TB (sdr)
WDC_WD30EFRX-68AX9N0_WD-WCCXXXXX428 - 3 TB (sds)
WDC_WD30EFRX-68AX9N0_WD-WMCXXXXX686 - 3 TB (sdu)
WDC_WD30EFRX-68AX9N0_WD-WMCXXXXX622 - 3 TB (sdk)
HGST_HDN724040ALE640_PK1338XXXXXNB - 4 TB (sdg)
ST4000VN000-1H4168_Z304XXXX - 4 TB (sdm)
ST4000VN008-2DR166_ZGY0XXXX - 4 TB (sdl)
HGST_HDN724040ALE640_PKXXXXPCJAZ35S - 4 TB (sdp)
ST8000VN0022-2EL112_ZA1XXXX5 - 8 TB (sdi)
ST8000VN0022-2EL112_ZA1XXXXJ - 8 TB (sdj)
VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sdc)
VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sdb)
VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sdd)
VK0800GDJYA_BTWL33XXXXXXXX0RGN - 800 GB (sde)
non-uni-spaced:
-
Love all the work going into the UI!
I do have a suggestion: a uni-space font for either all the tables/everything, or at least the disk identification column. It burns my eyes when the disk idents don't line up.
Thanks!
-
@limetech you guys couldn't wait another day for our parity checks to finish? 😛
-
Unraid OS version 6.10.0-rc2 available
in Prereleases
Posted
Did you clear your browser cache?