hawihoney Posted October 22, 2020

Can somebody please point me to documentation on how to avoid massive writes on NVMe M.2 cache SSDs running on Unraid 6.8.3? After reading the 6.9 beta notes I checked my cache disks, and they are heavily affected. Two 1 TB disks in a BTRFS cache pool show 21 TB of writes in 4 months while holding around 250 GB. The models in use are "Samsung SSD 970 EVO Plus 1TB".

- Available spare: 100%
- Available spare threshold: 10%
- Percentage used: 0%
- Data units read: 924,128 [473 GB]
- Data units written: 42,803,014 [21.9 TB]
- Host read commands: 12,644,896
- Host write commands: 505,220,723
- Controller busy time: 324
- Power cycles: 2
- Power on hours: 190 (7d, 22h)

Needless to say, I want to avoid long outages. Many thanks in advance.
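For reference, the bracketed totals in that output can be reproduced from the raw counters: NVMe SMART reports reads and writes in data units of 1,000 × 512 bytes. A quick sanity check in shell, using the write counter from the output above:

```shell
# NVMe SMART "Data Units Written" counts units of 1000 x 512 bytes.
units_written=42803014          # raw counter from the smartctl output above
bytes=$((units_written * 512 * 1000))
tb=$((bytes / 1000000000000))   # decimal terabytes, as smartctl reports them
echo "${tb} TB written"         # prints "21 TB written" (21.9 TB before truncation)
```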
Helmonder Posted October 22, 2020 (edited)

Does this only pertain to BTRFS? My M.2 cache drive (XFS) has written 233 TB and read 500 TB in 1.7 years. That also seems like a lot for a 1 TB drive: if I calculate correctly, the whole drive is rewritten roughly every 3 days, and it is almost always more than half full.
hawihoney (Author) Posted October 22, 2020 (edited)

30 minutes ago, Helmonder said: Does this only pertain to BTRFS?

Don't know, I've only read the 6.9 beta notes. In one of the past beta releases a complete procedure was introduced to move data off these disks, reformat them, and move the data back. Beta releases are not an option here, so I'm looking for a way to do something similar on stable 6.8.3.

***EDIT*** Your values look as if you are using the cache disk as an actual cache disk; 500 TB read and 233 TB written looks reasonable. I use the cache pool as a docker/VM store only. My values are 473 GB read, 22 TB written. That is definitely not reasonable.
caplam Posted October 22, 2020

I had that problem too. I had two 500 GB WD Blue SSDs in a BTRFS pool that died in 6 months. I replaced them with a single 860 EVO formatted as XFS (no pool is possible with XFS in Unraid). I upgraded to 6.9 beta 30 and tried a BTRFS pool again with two brand new 500 GB WD Blues, and I had the same problem. So I'm back to XFS, except that in 6.9 you can create several pools. I now have two XFS pools of single SSDs and no more problems. The downside is that you'd better have a strong backup, as the pools are unprotected. Now I only have to wait and see if and how ZFS will be implemented.
JorgeB Posted October 22, 2020

You can remount the pool with space_cache=v2; it should reduce writes by a lot, but to take advantage of all the mitigations you'd need to update to 6.9.
Helmonder Posted October 22, 2020 Share Posted October 22, 2020 1 minute ago, caplam said: i had that problem too. I had 2 ssd western blue 500Gb in a btrfs pool that died in 6months. I replaced them with a single 860 evo formatted in xfs (no pool possible with xfs in unraid). I upgraded to 6.9. beta 30 and tried again a btrfs pool with 2 brand new western blue 500Gb. And i had the same problem. So i'm back to xfs except that in 6.9 you can create several pools. I now have 2 xfs pool of signle ssd and no more problem. The downside is that you better have a strong backup as pools are unprotected. Now i only have to wait to see if and how zfs will be implemented. I run a daily backup with the plugin for docker backup.. actually works fine.. for the rest I use the cache for -cache- and that does not mean I care a lot if I loose something.. I think the cache pools are most valuable for situation where they are not really used for cache... But for "fast storage".. Quote Link to comment
hawihoney (Author) Posted October 22, 2020

1 minute ago, JorgeB said: You can remount the pool with space_cache=v2, it should reduce writes by a lot...

Can you please elaborate a little? A quick Google search shows space_cache=v2 as a BTRFS thing, while the 6.9 announcements suggest this is an incompatibility between Unraid and certain SSDs (Samsung among others) when using different sector alignments; at least that's how I understand it.
JorgeB Posted October 22, 2020

https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=9579
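For anyone not following the link: the workaround amounts to remounting the BTRFS pool with the v2 free-space cache. A minimal sketch, assuming the pool sits at Unraid's default /mnt/cache mount point (adjust the path to match your setup):

```shell
# Remount the existing BTRFS pool with the v2 free-space cache.
# /mnt/cache is Unraid's default cache mount point -- adjust if yours differs.
mount -o remount,space_cache=v2 /mnt/cache

# Confirm the option is active for the pool
grep /mnt/cache /proc/mounts
```

Since a remount doesn't survive a reboot, this is typically wrapped in a User Scripts job set to run at array start, as done later in this thread.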
JorgeB Posted October 22, 2020

The new alignment will also help, but that needs v6.9*; these were my results: https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=10142

* I found out recently that pools (and only pools) with the new alignment still mount correctly in previous versions, but it's unsupported, i.e. use at your own risk.
hawihoney (Author) Posted October 22, 2020

1 hour ago, JorgeB said: https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=9579

Added it to User Scripts (at Array Start). Will reconsider once 6.9 is stable. Thanks, man.