treecats Posted October 22, 2020
Unraid Version: 6.9.0-beta25
I added a new cache pool. My default file system setting for disks is xfs, but when the new disk finished formatting it came out as btrfs. Is this a bug, and how do I reformat it as xfs? This doesn't appear to be an issue with my original cache drive. Thanks!
zerg-diagnostics-20201022-1006.zip
JorgeB Posted October 22, 2020
10 minutes ago, treecats said: Is this a bug?
No, cache defaults to btrfs.
11 minutes ago, treecats said: and how do I reformat this as xfs?
Stop the array, click on the pool device, change its file system to xfs, start the array, then format.
treecats Posted October 22, 2020
10 minutes ago, JorgeB said: No, cache defaults to btrfs. Stop the array, click on the pool device, change its file system to xfs, start the array, then format.
My default file system setting is already xfs, yet when I formatted the SSD it came out as btrfs. I understand that a multi-drive cache pool has to be btrfs, but since I created this as a separate single-drive pool, xfs should be fine.
itimpi Posted October 22, 2020
Just now, treecats said: My default file system setting is already xfs, yet when I formatted the SSD it came out as btrfs.
I think you are getting confused between the default format for array drives and the one for cache drives/pools. Cache always defaults to btrfs (and btrfs is the only option if there is more than one drive in a pool), so you have to change it manually to get something else.
trurl Posted October 22, 2020
If cache didn't default to btrfs, then people would wonder why they can't add additional disks to the pool, and they would wonder this after they already had data on cache.
treecats Posted October 22, 2020
Ok, got it. Just in case: if I add another SSD to the existing pool in the future, I will need to reformat my cache to btrfs? lol, that could be painful. Is btrfs stable enough compared to xfs these days? Also, I recall there was an issue in 6.8.3 with excessive cache writes, which is the reason I went with the 6.9 beta. Thanks!
trurl Posted October 22, 2020
3 minutes ago, treecats said: if I add another SSD to the existing pool in the future, I will need to reformat my cache to btrfs?
Yes, this is exactly what I meant by
8 minutes ago, trurl said: people would wonder why they can't add additional disks to the pool, and they would wonder this after they already had data on cache.
jay010101 Posted December 30, 2020
Sorry to harp on this topic. I'm using 6.8.3 and I'm seeing a lot of writes on my SSD cache drive. It's a usual setup, with some of my shares using the cache (via the mover) and my docker containers on the cache. I only have three dockers: Plex, Unifi and Krusader. I installed iotop and am seeing writes of around 1GB per 15 minutes. I'm currently doing a move and am going to turn off the cache on my shares to see what that produces. My cache is btrfs; should I convert it to xfs? I only have one drive. I could move to the beta if it solves this issue. I'll insert a pic of my iotop output. The move is dominating the chart, but it's the [loop2] process that is at 2GB in the last 30 minutes. Any idea what loop2 is?
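In case it helps anyone else narrow this down, one way to watch cumulative writes per process with iotop (just a sketch; the options below are a suggestion, not necessarily what was used here):

# accumulate totals, show only processes actually doing I/O, refresh every 10 seconds
iotop -ao -d 10

With -a the DISK WRITE column shows totals since iotop started rather than instantaneous rates, which makes slow-but-constant writers like loop2 much easier to spot.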
jay010101 Posted December 30, 2020
EDIT: So [loop2] is somehow related to the Plex docker. If I stop the Plex docker, the writes stop. I'm using an added Template Repository (https://github.com/plexinc/pms-docker). Maybe I should try another docker and see if it still has the same issue?
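Another quick check for which container is doing the writing, rather than stopping them one at a time, is docker stats (a one-off snapshot; the BLOCK I/O column is cumulative since each container started):

# snapshot of CPU, memory and block I/O per running container
docker stats --no-stream

The per-container numbers won't include any write amplification happening at the loop/btrfs layer, but they are useful for comparing containers against each other.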
tjb_altf4 Posted December 30, 2020
2 hours ago, jay010101 said: Sorry to harp on this topic. I'm using 6.8.3 and I'm seeing a lot of writes on my SSD cache drive.
Known issue, resolved in 6.9 (currently in RC); however, some dockers have been known to exacerbate the situation. There's lots of good discussion if you have the time to read through it. You can run this command once the array is up (once per reboot) to help address the issue:
mount -o remount -o space_cache=v2 /mnt/cache
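If you'd rather not retype that after every boot, one option (just a sketch, assuming the pool is mounted at /mnt/cache and that you put it somewhere that runs at startup, such as the /boot/config/go script or a User Scripts job) is something like:

# wait in the background until the cache pool is mounted, then switch it to the v2 free-space cache
(while ! mountpoint -q /mnt/cache; do sleep 30; done
 mount -o remount -o space_cache=v2 /mnt/cache) &

If you run it from a User Scripts job scheduled for array start instead, the wait loop isn't needed, since the pool is already mounted by then.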
trurl Posted December 30, 2020
7 hours ago, jay010101 said: EDIT: So [loop2] is somehow related to the Plex docker. If I stop the Plex docker, the writes stop. I'm using an added Template Repository (https://github.com/plexinc/pms-docker). Maybe I should try another docker and see if it still has the same issue?
loop2 is the docker.img vdisk, which all dockers use.
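You can confirm that on your own box with losetup, which lists each loop device and the file backing it (the loop number may differ on other systems):

# show all loop devices and their backing files; docker.img should appear next to one of them
losetup -a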
jay010101 Posted December 30, 2020
11 hours ago, tjb_altf4 said: Known issue, resolved in 6.9 (currently in RC); however, some dockers have been known to exacerbate the situation. You can run this command once the array is up (once per reboot) to help address the issue: mount -o remount -o space_cache=v2 /mnt/cache
Would you recommend just moving to the beta? The command above appears to help. I could add it to my startup script, but that seems like a band-aid fix.
tjb_altf4 Posted December 31, 2020
4 hours ago, jay010101 said: Would you recommend just moving to the beta? The command above appears to help. I could add it to my startup script, but that seems like a band-aid fix.
I made the move when it hit RC, and it's been solid. You can always roll back if you hit any immediate issues.