Guide on how to stop excessive writes destroying your cache SSD



Hi, big thanks to OP for figuring all these fixes out. I have a docker app that writes to a Redis DB (some 50 MB large) every few seconds, and your ramdisk trick really helped tame the cache writes. I just want to add a note that, if you do go ahead with the 'appramdisk' route, the container's appdata path also has to be modified to use that appramdisk, on top of all the scripts that you created. It took me an embarrassingly long time to figure out why ~10 MB/s was still constantly being written to the cache even with the appramdisk script up and running. Hope that can help someone else!

 

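For anyone wondering what that looks like in practice, here is a minimal sketch (the container name and paths are only examples; in the Unraid GUI this corresponds to the host side of the appdata path mapping in the container template):

# Hypothetical example: the host side of the appdata mapping has to point at
# the ramdisk, not the regular appdata folder.
# Old mapping:  /mnt/cache/appdata/redis            -> /data
# New mapping:  /mnt/cache/appdata/appramdisk/redis -> /data
docker run -d --name redis \
  -v /mnt/cache/appdata/appramdisk/redis:/data \
  redis:alpine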

Link to comment
3 hours ago, Kagoromo said:

[...] if you do go ahead with the 'appramdisk' route, the container's appdata path also has to be modified to use that appramdisk, on top of all the scripts that you created.

 

I am embarrassed that I didn't include this in the OP lol.

 

Don't feel too bad, I set up a new docker container the other day and forgot to do this as well, and it took me longer than it should have to figure out the issue.

  • Like 1
Link to comment
  • 1 month later...
On 8/28/2021 at 12:41 PM, mgutt said:

PS Instead of this:

--mount type=tmpfs,destination=/tmp,tmpfs-mode=1777,tmpfs-size=256000000

 

I prefer creating a new path and linking it to Unraid's /tmp, which is already a RAM-Disk:

[screenshot: container template path mapping onto the host's /tmp]

 

Another example:

[screenshot: the same kind of /tmp path mapping for a second container]
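In plain commands, the idea is roughly the following (a sketch; the container and image names are placeholders, and in the Unraid GUI you would add this as a new Path in the template rather than running docker by hand):

# Host /tmp is already a RAM-backed tmpfs on Unraid, so mapping a subfolder of
# it into the container keeps the container's /tmp writes off the SSD.
mkdir -p /tmp/mycontainer
docker run -d --name mycontainer \
  -v /tmp/mycontainer:/tmp \
  myimage:latest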

 

The /tmp size is already limited to 50% of the RAM, and no application should be writing that much temporary data anyway. My total /tmp size in Unraid is 57 MB:

du -sh /tmp
57M     /tmp

 

 

 

Hello,
I was looking into this to have the tmp folders of some of my dockers reside in RAM as well, and this seemed like a straightforward way to set it up.
Initially it worked, but then I had reason to reboot my machine and suddenly my Nextcloud docker refused to start, as it was not able to write to its mapped tmp folder: the permissions on the Nextcloud folder in /tmp had been set for the root user. After some digging I also found this:

It seems that using the /tmp folder for something like this is not encouraged.

Link to comment
19 hours ago, Crovaxon said:

the permissions on the Nextcloud folder in /tmp had been set for the root user.

 

Yes, I have this problem, too. I think this is a bug in Docker or Unraid, as creating the dir through the container while installing it works as expected, but not if the dirs are missing and the container is merely started.

Link to comment

I have switched to the Extra Parameters variant of creating a ramdisk for my chatty and active containers; that works just as well for now.
Now that I understand better how to take advantage of my added cache drive and how to keep it from being bombarded, I finally managed to get my array disks to stay spun down. Next is keeping a good eye on what interacts with my cache drive, and how much, on a daily basis, and I hope to have my drive wear under good control soon. :)

Link to comment
  • 5 months later...

I want to contribute my two cents to this thread. Thanks everyone for their commitment to it.

 

I have now come up with my own version of a solution based on OP's and mgutt's solutions.

 

My solution is also based on three steps, but is more streamlined in my opinion.

 

Step 1: Get rid of all unnecessary log and temp writes in appdata. I searched for every file or folder change containing "log", "temp" or "cache" in appdata using inotify.

 

inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata --includei="(cache)|(te?mp)|(logs?)"

 

Then you can mount these paths as tmpfs in badly behaving containers. The easiest way is to add "--tmpfs <path>" to Extra Parameters; mgutt's command works as well.

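For example, an Extra Parameters value could look like this (the path and size are illustrative; use whatever directory your inotify log pointed you at):

# Illustrative only: mount the container's log directory as a size-limited tmpfs
--tmpfs /config/logs:rw,size=64m,mode=1777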

 

On 8/17/2021 at 4:38 PM, mgutt said:

PS I'm using the same trick for Plex and changed the transcode path to /tmp:

[screenshot: Plex template with the transcode path mapped to /tmp]

 

Step 2: Move /var/lib/docker to a ramdisk and create a ramdisk in appdata for regularly written files.

 

On 8/17/2021 at 4:38 PM, mgutt said:

 

Thanks mgutt for the hard part of the script in the go-file. I took the liberty of extending it a bit to allow a ramdisk at /mnt/user/appdata/appramdisk/ram. This is basically part 3 of OP's post, but included in mgutt's solution. Add the script, create the folders in appdata and reboot.

Spoiler

Create the necessary directories:

mkdir /mnt/cache/appdata/appramdisk
mkdir /mnt/cache/appdata/appramdisk/ram
mkdir /mnt/cache/appdata/appramdisk/disk
chmod 777 /mnt/cache/appdata/appramdisk
chmod 777 /mnt/cache/appdata/appramdisk/ram
chmod 777 /mnt/cache/appdata/appramdisk/disk

 

Modified script for the go-file:

# -------------------------------------------------
# RAM-Disk for Docker json/log files
# -------------------------------------------------
# create RAM-Disk on starting the docker service
sed -i '/^  echo "starting \$BASE ..."$/i \
  # move json/logs to ram disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  mount -t tmpfs tmpfs /var/lib/docker/containers\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  mount -t tmpfs tmpfs /mnt/user/appdata/appramdisk/ram\
  rsync -aH --delete /mnt/user/appdata/appramdisk/disk/ /mnt/user/appdata/appramdisk/ram\
  logger -t docker RAM-Disk created' /etc/rc.d/rc.docker
# remove RAM-Disk on stopping the docker service
sed -i '/^  # tear down the bridge$/i \
  # backup json/logs and remove RAM-Disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  umount /var/lib/docker/containers\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  rsync -aH --delete /mnt/user/appdata/appramdisk/ram/ /mnt/user/appdata/appramdisk/disk\
  umount /mnt/user/appdata/appramdisk/ram\
  logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker
# Automatically backup Docker RAM-Disk
sed -i '/^<?PHP$/a \
$sync_interval_minutes=30;\
if ( ! ((date('i') * date('H') * 60 + date('i')) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
  exec("mkdir /var/lib/docker_bind");\
  exec("mount --bind /var/lib/docker /var/lib/docker_bind");\
  exec("rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers");\
  exec("umount /var/lib/docker_bind");\
  exec("rmdir /var/lib/docker_bind");\
  exec("rsync -aH --delete /mnt/user/appdata/appramdisk/ram/ /mnt/user/appdata/appramdisk/disk");\
  exec("logger -t docker RAM-Disk synced");\
}' /usr/local/emhttp/plugins/dynamix/scripts/monitor
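
For reference, the block above is meant to go into Unraid's go-file so the patches are re-applied on every boot, roughly like this:

# The go-file lives on the flash drive and runs at every boot;
# append the block above to it (keeping a backup copy doesn't hurt).
cp /boot/config/go /boot/config/go.bak
nano /boot/config/go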

 

 

Don't forget to add an exclusion for appramdisk/ram in the "Backup/Restore Appdata" plugin, otherwise the backup could fail (I need to test this more, but better safe than sorry).


 

Step 3: rsync desired folders/files to appdata/appramdisk/ram and change the docker templates accordingly.

 

Using inotify once again to find candidates for moving to the ramdisk:

inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata --exclude appramdisk

 

Example for Radarr:

rsync -avH /mnt/user/appdata/radarr /mnt/user/appdata/appramdisk/ram/ --exclude MediaCover --exclude Backup

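The matching template change would then look something like this (a sketch; the image name and paths are examples, and in practice you would edit the /config host path in the Radarr template rather than running docker by hand):

# After the rsync above, the container's /config must point at the RAM copy.
# Old host path:  /mnt/user/appdata/radarr
# New host path:  /mnt/user/appdata/appramdisk/ram/radarr
docker run -d --name radarr \
  -v /mnt/user/appdata/appramdisk/ram/radarr:/config \
  lscr.io/linuxserver/radarr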

 

During the whole of step 3 you should keep in mind that less is more, as you don't want to waste your precious, limited RAM. That is why I exclude MediaCover in the example above: it holds about 1 GB of data for me, while the rest of the Radarr appdata is a mere 36 MB. Also, if only one file is accessed often, try to move that file into a separate folder and only sync this folder to RAM. I did this for home-assistant_v2.db, for example (see the sketch below).
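
A sketch of the home-assistant_v2.db case (the folder names are examples; the recorder option in configuration.yaml is what lets you point the database at a different path):

# Example only: give the database its own folder so only that folder is synced to RAM.
mkdir -p /mnt/user/appdata/appramdisk/ram/hass-db
rsync -avH /mnt/user/appdata/home-assistant/home-assistant_v2.db \
  /mnt/user/appdata/appramdisk/ram/hass-db/
# Then map e.g. /config/db -> /mnt/user/appdata/appramdisk/ram/hass-db in the
# template and set in configuration.yaml:
#   recorder:
#     db_url: "sqlite:////config/db/home-assistant_v2.db"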

 

I'm sure I will come up with some updates in the future and will post them accordingly.

 

Cheers.

 

Update concerning Plex / whitespace in paths:

I experienced some issues with Plex, which I suspect were due to whitespace in the paths. I rsynced the folders to the ramdisk, deleted the originals, created symbolic links to /Logs and /Databases in their place, and then created the corresponding paths in my appramdisk/ram folder, which resolved the issue completely.

ln -s /Logs /mnt/user/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Logs/

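Spelled out in commands, my reading of the workaround is roughly this (a sketch; the paths are examples, and the two ln targets are container-side paths that only resolve once the new /Logs and /Databases mappings exist in the template):

PLEX="/mnt/user/appdata/PlexMediaServer/Library/Application Support/Plex Media Server"
RAM="/mnt/user/appdata/appramdisk/ram/plex"

# 1. copy the chatty folders to the ramdisk
mkdir -p "$RAM"
rsync -aH "$PLEX/Logs" "$RAM/"
rsync -aH "$PLEX/Databases" "$RAM/"

# 2. add two new, whitespace-free paths to the Plex template:
#      /Logs      -> /mnt/user/appdata/appramdisk/ram/plex/Logs
#      /Databases -> /mnt/user/appdata/appramdisk/ram/plex/Databases

# 3. replace the originals with symlinks; the targets are the container paths
#    from step 2, so they resolve inside the container, not on the host
rm -r "$PLEX/Logs" "$PLEX/Databases"
ln -s /Logs      "$PLEX/Logs"
ln -s /Databases "$PLEX/Databases"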

 

Edited by madejackson
  • Thanks 1
Link to comment
On 5/25/2022 at 11:44 AM, madejackson said:

I searched for every file or folder change containing "log", "temp" or "cache" in appdata using inotify.

You are searching in appdata. 99% of all container templates do not mount such dirs to appdata.

 

The above example mounts the /tmp dir inside of the Plex container into RAM.

Link to comment
1 hour ago, mgutt said:

You are searching in appdata. 99% of all container templates do not mount such dirs to appdata.

Unfortunately they do, probably due to a lack of alternatives? Radarr, Sonarr, Plex etc. all have logs in appdata. Last week I found a log file of 8 GB in appdata. Probably some old culprit or misconfiguration, but still.

 

As a template creator I'd strongly encourage data integrity / data safety as the default setting. But I, as a user, should have the option to change that.

 

A better solution should come from Unraid or a plugin rather than the container templates (e.g. an optional ramdisk cache for specific shares or folders).

Edited by madejackson
Link to comment

All,

 

Why does this forum post have only a few contributors? Does this imply that the majority of Unraid users are not confronted with this problem of excessive writes anymore?

If so, how come? Do they use a different setup, or do people simply accept the early death of SSDs?

Link to comment

Not every docker container writes excessively to a log file, and not every person uses chatty docker containers, so those people are naturally not confronted with this, or only have an amount of writes that they do not care or worry about. And I assume there are also people who do not notice it at all. I did not immediately notice it myself either; everything was working after all, and my SSD being active seemed logical to me the more containers I introduced to my system. Only on closer inspection did I see that the number of write accesses seemed unexpectedly high, while none of my containers were particularly heavily used or active.

Link to comment
  • 3 months later...
  • 3 months later...
On 7/3/2021 at 11:03 AM, TexasUnraid said:
mount -vt tmpfs -o size=8G appramdisk /mnt/write-cache/appdata/appramdisk

 

Great guide! My containers killed two SSDs before I saw this guide, so I've just decided to stick with a spinner cache disk. However, I wanted to move a few containers to the ramdisk for performance. I noticed this line uses the mount point `/mnt/write-cache/...`; I thought it was some kind of system path, but it gave me an error when I ran it manually. I'm just using `/mnt/cache/...` instead.

 

Was this a typo or have I incorrectly set this up?

Link to comment
On 12/12/2022 at 8:18 AM, funbubba said:

 

[...] I noticed this line uses the mount point `/mnt/write-cache/...`; I thought it was some kind of system path, but it gave me an error when I ran it manually. [...] Was this a typo or have I incorrectly set this up?

 

Oops, my bad, that is my custom path for my server; I have a separate cache pool just for the dockers, and my normal cache is on RAID 5 15k drives. Yes, just swap it for whatever your cache path is.

 

Although, to be honest, I don't actually use the write cache for the array. I like controlling which disk the data is put on, so I move most data to the array manually using the disk paths. Most of the network usage is either reads or goes directly to the cache.

 

The stuff that doesn't is generally backups, and that is limited by SMB due to small files, so there is no reason to use the cache.

Edited by TexasUnraid
Link to comment
  • 3 months later...
On 7/4/2021 at 1:03 AM, TexasUnraid said:

Like above, first you need to log the appdata folder to see where the writes come from:

 

 This command will watch the appdata folder for writes and log them to /mnt/user/system/appdata_recentXXXXXX.txt

 

inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata/*[!ramdisk] > /mnt/user/system/appdata_recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt

 

 

I had problems with this inotify command. It ran and created the text file, but nothing was ever logged to the file.

I can only get it to log anything by removing *[!ramdisk] AND pointing it to /mnt/cache/appdata.
Just curious if anyone can explain why this is?
I have my appdata share defined (cache: prefer setting) as /mnt/cache/appdata for all containers and as the default in docker settings, but I would have thought that /mnt/user/appdata should still work?

Edited by Jorgen
Link to comment
  • 1 month later...
  • 5 months later...

Hello! I'm scratching my head on this one, since I cannot seem to find anything on it. After running

 

inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /var/lib/docker > /mnt/user/system/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt

 

I seem to be getting an extreme number of CREATE / MODIFY / MOVED_FROM / MOVED_TO events for status.new in the log:

 

Sun Oct  8 22:59:06 2023 CREATE /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status.new
Sun Oct  8 22:59:06 2023 MODIFY /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status.new
Sun Oct  8 22:59:06 2023 MOVED_FROM /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status.new
Sun Oct  8 22:59:06 2023 MOVED_TO /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status

Log and diagnostics from the server are attached. Would deleting and recreating my docker image solve this? My poor cache drive has had 3 TB written in the last few days. I am on 6.10.3 and have already deleted and formatted my cache pool.

recent_modified_files_20231008_225740.txt tower-diagnostics-20231008-2308.zip

Link to comment
