Alright. I put a Samsung 850 EVO mSATA 250GB in my unRAID server as a cache drive a couple of weeks ago, as I got it dirt cheap. A few days ago I stumbled upon the SMART data and noticed that it had already passed 3 TBW. I'm using the cache for downloads that then move over to the array, and I also keep appdata and system on the cache to stop the array drives from spinning up all the time.
I made a simple script that extracts the SMART TBW value every night, and it's increasing by roughly 50-60 GB every day. Over the latest week the activity has been close to zero (i.e. no downloads or other stuff written to the array), so that leaves appdata or system.
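For reference, the nightly script is nothing fancy. A minimal sketch of the idea, assuming the 850 EVO reports SMART attribute 241 (Total_LBAs_Written) in 512-byte units as Samsung drives do; the device and log paths are placeholders, not my actual ones:

```shell
#!/bin/bash
# Log total bytes written once per night, e.g. from a cron job.
# DRIVE and LOG are placeholders; adjust to your cache device and a
# path that survives reboots (the flash drive on unRAID).
DRIVE="/dev/sdb"
LOG="/boot/tbw.log"

# Convert an LBA count to GiB (512 bytes per LBA on the 850 EVO).
lbas_to_gib() {
    echo $(( $1 * 512 / 1024 / 1024 / 1024 ))
}

if command -v smartctl >/dev/null 2>&1; then
    # Column 10 of `smartctl -A` is the raw attribute value.
    lbas=$(smartctl -A "$DRIVE" | awk '$2 == "Total_LBAs_Written" {print $10}')
    echo "$(date +%F) $(lbas_to_gib "$lbas") GiB" >> "$LOG"
fi
```

Diffing consecutive lines of the log gives the per-day write figure.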
I tried digging deeper with iotop to find what causes so many writes when nothing is happening.
This is a snippet after running iotop for ~20 minutes; during this time there was no activity in unRAID except the docker containers running as usual.
Total DISK READ : 0.00 B/s | Total DISK WRITE : 49.99 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 178.53 K/s
PID PRIO USER DISK READ DISK WRITE> SWAPIN IO COMMAND
7144 be/4 root 84.00 K 552.05 M 0.00 % 0.01 % shfs /mnt/user -disks 31 2048000000 ~big_writes,allow_other -o remember=0
7427 be/0 root 0.00 B 264.48 M 0.00 % 0.03 % [loop2]
7450 be/4 root 0.00 B 52.22 M 0.00 % 0.05 % [btrfs-transacti]
16673 be/4 root 0.00 B 8.20 M 0.00 % 0.00 % [kworker/u32:15-bond0]
3409 be/4 root 0.00 B 7.48 M 0.00 % 0.00 % [kworker/u32:0-btrfs-worker]
Could someone help me understand what I'm looking at here? As I've understood it, loop2 is the docker image? But what is shfs doing? Is it the combined writes made by my various containers?
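On the loop2 question, the loop device mappings can be checked with `losetup -a`, which lists each loop device with its backing file. A sketch of what I'd expect on a stock unRAID setup (the docker.img path below is the usual default, not verified on my box):

```shell
# What `losetup -a` is expected to print for loop2 on a typical unRAID
# install (hypothetical sample line; run `losetup -a` to see your own):
sample='/dev/loop2: []: (/mnt/cache/system/docker/docker.img)'

# Pull the backing file path out of the parentheses:
echo "$sample" | sed 's/.*(\(.*\))/\1/'
```

If loop2 points at docker.img, then all writes inside the docker loopback filesystem get accounted to it.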
These are the containers currently running:
bazaar
letsencrypt
mariadb
mediawiki
nzbget
organizr
PlexMediaServer
radarr
sonarr
speedtest
tautulli
transmission
unifi
I've tried stopping them one by one and letting iotop run for a while to see if there was a significant difference in writes, but there wasn't really any container that stood out. Stopping them all did make a big difference though, but that's not a good solution.
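One way to narrow it down without stopping anything: docker keeps cumulative per-container I/O counters in the blkio cgroup. This is just a sketch, assuming the cgroup v1 layout (`/sys/fs/cgroup/blkio/docker/...`) that unRAID's docker used; the path may differ on other setups:

```shell
# Print cumulative MiB written per running container since it started.
if command -v docker >/dev/null 2>&1; then
    for id in $(docker ps -q); do
        name=$(docker inspect --format '{{.Name}}' "$id")
        # Sum the "Write" rows across all block devices for this container.
        bytes=$(awk '$2 == "Write" {sum += $3} END {print sum+0}' \
            /sys/fs/cgroup/blkio/docker/"$id"*/blkio.throttle.io_service_bytes 2>/dev/null)
        bytes=${bytes:-0}
        echo "$name: $(( bytes / 1024 / 1024 )) MiB written"
    done
fi
```

Sampling this twice a few hours apart and diffing the numbers should show which container is the chatty one.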
So, are 50-60 GB of writes a day considered normal, or is something off? Any tips or tricks for tweaking any of the containers?
I know you might think this is a non-problem, since the 850 EVO 250GB is rated for 75 TBW (and probably handles a bit more, according to SSD endurance test reviews), but I really think it's odd that a bunch of low-activity containers could write this amount of data every day... it can't just be logs, right?
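For what it's worth, the back-of-envelope math on the 75 TBW rating (assuming 1 TBW = 1000 GB and a steady 60 GB/day) comes out like this:

```shell
# Days until the rated 75 TBW is reached at 60 GB written per day:
awk 'BEGIN { days = 75 * 1000 / 60; printf "%.0f days (~%.1f years)\n", days, days / 365 }'
# -> 1250 days (~3.4 years)
```

So it's not an emergency, but I'd still like to understand where the writes come from.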