How to work around massive cache writes on 6.8.3



Can somebody please point me to documentation on how to avoid massive writes to NVMe M.2 cache SSDs running on Unraid 6.8.3?

 

After reading the 6.9 beta notes I checked my cache disks and they are heavily affected. Two 1 TB disks in a BTRFS cache pool show 21.9 TB of writes in 4 months while holding only around 250 GB. The models in use are "Samsung SSD 970 EVO Plus 1TB".

 

-	Available spare: 100%
-	Available spare threshold: 10%
-	Percentage used: 0%
-	Data units read: 924,128 [473 GB]
-	Data units written: 42,803,014 [21.9 TB]
-	Host read commands: 12,644,896
-	Host write commands: 505,220,723
-	Controller busy time: 324
-	Power cycles: 2
-	Power on hours: 190 (7d, 22h)
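For anyone who wants to pull the same counters, this is roughly how they can be read on the command line (assuming the first NVMe device is /dev/nvme0; adjust for your system):

# print the NVMe SMART / health log, including the data unit counters
smartctl -A /dev/nvme0

# NVMe reports data units in blocks of 512,000 bytes, so:
# 42,803,014 units x 512,000 bytes ≈ 21.9 TB written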

 

Needless to say, I want to avoid long outages.

 

Many thanks in advance.

 


Does this only pertain to BTRFS?

 

My M.2 cache drive (XFS) has written 233 TB and read 500 TB in 1.7 years... That also seems like a lot for a 1 TB drive... If I calculate correctly, the whole 1 TB drive gets completely rewritten every 3 days... And it is mostly more than half full...
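For what it's worth, a quick sanity check of that rate, using the 233 TB and 1.7 years from above:

awk 'BEGIN { tb = 233; days = 1.7 * 365; printf "%.2f TB/day, i.e. one full 1 TB drive roughly every %.1f days\n", tb/days, days/tb }'

which comes out at about 0.38 TB per day, or a full drive write roughly every 2.7 days.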

Edited by Helmonder
30 minutes ago, Helmonder said:

Does this only pertain to BTRFS?

Don't know.

 

I just read the 6.9 beta notes. One of the past beta releases introduced a complete procedure for moving data off these disks, reformatting them, and moving the data back.

 

Beta releases are not an option here, so I'm looking for a way to do something similar on stable 6.8.3.
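For reference, I guess a manual equivalent on 6.8.3 would look roughly like the sketch below (the paths are just examples, and anything using the pool (Docker, VMs) would have to be stopped first). Note that a reformat under 6.8.3 keeps the old partition alignment, so on its own it presumably would not reduce the writes:

# copy the pool's contents to an array disk (example paths)
rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/
# reformat the pool from the webGUI, then copy everything back
rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/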

 

***EDIT*** Your values look as if you are actually using the cache disk as a cache disk ;-) 500 TB read and 233 TB written look reasonable for that. I use the cache pool as a Docker/VM store only. My values are 473 GB read and 22 TB written, which is definitely not reasonable.
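In case it helps to see where the writes actually come from, one option (assuming iotop is installed, e.g. via the Nerd Pack plugin) is to let it accumulate I/O per process for a few minutes:

# -a accumulates totals since start, -o only lists processes actually doing I/O
iotop -ao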

 

Edited by hawihoney

I had that problem too. I had two 500 GB WD Blue SSDs in a BTRFS pool that died within 6 months.

I replaced them with a single 860 EVO formatted as XFS (no pool is possible with XFS in Unraid).

I upgraded to 6.9 beta 30 and tried a BTRFS pool again with two brand new 500 GB WD Blue drives, and I had the same problem.

So I'm back to XFS, except that in 6.9 you can create several pools. I now have two XFS pools of a single SSD each and no more problems.

The downside is that you'd better have a strong backup, as those pools are unprotected.

Now I only have to wait and see if and how ZFS will be implemented.
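Since those single-device XFS pools are unprotected, a nightly rsync to a share on the parity-protected array can cover that. A minimal sketch, with pool and share paths that are just examples:

# mirror the appdata pool to a backup share on the array
rsync -avh --delete /mnt/cache/appdata/ /mnt/user/backups/appdata/

--delete keeps the backup an exact mirror; drop it if you prefer to keep files that were removed from the pool.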

1 minute ago, caplam said:

The downside is that you'd better have a strong backup, as those pools are unprotected.

I run a daily backup with the Docker backup plugin, which actually works fine. For the rest I use the cache as cache, and that means I don't care much if I lose something.

 

I think the cache pools are most valuable in situations where they are not really used as cache, but as "fast storage".

1 minute ago, JorgeB said:

You can remount the pool with space_cache=v2; it should reduce writes by a lot, but to take advantage of all the mitigations you'd need to update to 6.9.

Can you please elaborate a little? A quick Google search shows this is a BTRFS thing, whereas the 6.9 announcements make it sound like an incompatibility between Samsung (and similar) drives and Unraid's partition alignment (at least that's how I understand it).

 


The new alignment will also help, but that needs v6.9*. These were my results:

https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=10142

 

* I found out recently that pools (and only pools) with the new alignment still mount correctly in previous versions, but it's unsupported, i.e. use at your own risk.
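For the 6.8.3 side of it, the remount mentioned above would look roughly like this (assuming the pool is mounted at /mnt/cache; check /proc/mounts afterwards to confirm the option actually took effect, and note that a plain remount does not survive a reboot):

# switch the pool to the v2 free space cache (free space tree)
mount -o remount,space_cache=v2 /mnt/cache

# confirm the active mount options
grep /mnt/cache /proc/mounts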

