RAM-Disk for Docker status/log files



To reduce wear on your SSD, this script moves your Docker status and log files into RAM. Every 30 minutes it syncs the RAM content back to your SSD, to reduce the loss of log files.

 

Paste the following script into the /boot/config/go file on your USB flash drive, or create a script with the User Scripts plugin that is executed only on the first start of the array (check the Unraid version info in the script; it may have been updated some pages later in this thread):

 

 

Use the "execute in the background" button to install the script the very first time. Then go to Settings > Docker and set Docker to "No" and back to "Yes".

 

How it works

 

Docker service is started

- It creates the path /mnt/<path_to_your_docker_data>/containers_backup

- It copies all files from /var/lib/docker/containers to the containers_backup path

- It mounts a RAM disk (tmpfs) at /var/lib/docker/containers (on top of the already existing path, so the on-disk files are hidden, not deleted)

- It copies all files from containers_backup back into the now empty RAM disk at /var/lib/docker/containers
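
 

Condensed from the v1.4 version posted later in this thread, the start sequence boils down to these commands (${DOCKER_APP_CONFIG_PATH} is the Docker data path variable from Unraid's rc.docker):

# back up the current container metadata (json/logs) to persistent storage
rsync -aH --delete /var/lib/docker/containers/ "${DOCKER_APP_CONFIG_PATH%/}/containers_backup"
# mount a tmpfs (RAM disk) on top of the existing directory
mountpoint -q /var/lib/docker/containers || mount -t tmpfs tmpfs /var/lib/docker/containers
# restore the backup into the now empty RAM disk
rsync -aH --delete "${DOCKER_APP_CONFIG_PATH%/}/containers_backup/" /var/lib/docker/containers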

 

Docker service is stopped

- It reverses the steps above: the RAM disk content is synced to the containers_backup path, the RAM disk is unmounted, and the files are restored to the on-disk /var/lib/docker/containers (see the sketch below)
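
 

Again condensed from the v1.4 version posted later in this thread:

# sync the RAM disk content back to persistent storage
rsync -aH --delete /var/lib/docker/containers/ "${DOCKER_APP_CONFIG_PATH%/}/containers_backup"
# remove the RAM disk and restore the files to the on-disk path
umount /var/lib/docker/containers
rsync -aH --delete "${DOCKER_APP_CONFIG_PATH%/}/containers_backup/" /var/lib/docker/containers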

 

Every 30 minutes

- It creates the path /var/lib/docker_bind

- It bind-mounts /var/lib/docker to /var/lib/docker_bind (this trick lets us reach the on-disk /var/lib/docker/containers, which is otherwise hidden under the RAM disk)

- It copies all files from the RAM disk /var/lib/docker/containers to /var/lib/docker_bind/containers (which is the on-disk /var/lib/docker/containers path)
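
 

The corresponding commands, condensed from the v1.4 version posted later in this thread:

# expose the on-disk files hidden underneath the tmpfs
mkdir -p /var/lib/docker_bind
mountpoint -q /var/lib/docker_bind || mount --bind /var/lib/docker /var/lib/docker_bind
# sync the RAM disk content to the persistent copy, then detach the bind mount
rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers
umount -l /var/lib/docker_bind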

 

Reduce RAM Usage

If you are worried about too much RAM usage, you can change your Docker settings as follows:

[screenshot: Docker settings with log rotation enabled]

 

With that, each container can only create a single log file, which is limited to 50 MB.
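
 

The Unraid GUI sets this globally. For comparison, on a plain Docker host the equivalent per-container flags of the json-file log driver would look like this (the image name is just a placeholder):

# limit the container to a single log file of at most 50 MB
docker run -d --log-driver json-file --log-opt max-size=50m --log-opt max-file=1 my-image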

 

How much RAM is used

Usually less than 512 MB:

# df -h /var/lib/docker/containers
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            32G  369M   31G   2% /var/lib/docker/containers

 

Does it really help?

For me it made a huge difference (most of the time absolutely no writes to my SSD at all). But read the reactions of other users:

https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/?do=findComment&comment=15472

 


Thanks for sharing this. I'm using your script (with the minor edit that @ICDeadPpl highlighted; it was erroring out for me as well without adding the backslash).

 

What is your recommendation, @mgutt, for identifying what is causing the writes to /var/lib/docker/overlay2? I'm still getting pretty consistent writes even after incorporating the script.

On 3/10/2023 at 9:38 PM, kaiguy said:

What is your recommendation, @mgutt, for identifying what is causing the writes to /var/lib/docker/overlay2?

Repeat this command every minute or so:

 

find /var/lib/docker -type f -not -path "*/diff*" -print0 | xargs -0 stat --format '%Y:%.19y %n' | sort -nr | cut -d: -f2- 2> /dev/null | head -n30 | sed -e 's|/merged|/...|; s|^[0-9-]* ||'

 

Now you should see which files are being updated constantly (take a look at the timestamps).

 

Let's say one file is updated frequently, like this one:

/var/lib/docker/overlay2/b04890a87507090b14875f716067feab13081dea9cf879aade865588f14cee67/merged/tmp/hsperfdata_abc/296

 

Problem: You don't know which container is writing to the path "b04890...". So you need this command to obtain that information:

 

# build a table: container name/ID -> storage paths
csv="CONTAINER;PATHS\n"
for f in /var/lib/docker/image/*/layerdb/mounts/*/mount-id; do
  subid=$(cat $f)                                     # layer dir under overlay2/ or btrfs/
  idlong=$(dirname $f | xargs basename)               # full container ID
  id="$(echo $idlong | cut -c 1-12)"                  # short container ID
  name=$(docker ps --format "{{.Names}}" -f "id=$id") # resolve ID to container name
  [[ -z $name ]] && continue                          # skip containers that are not running
  csv+="\n"$(printf '=%.0s' {1..20})";"$(printf '=%.0s' {1..100})"\n"
  csv+="$name;/var/lib/docker/(btrfs|overlay2).../$subid\n"
  csv+="$id;/var/lib/docker/containers/$idlong\n"
  # append the container's volume mounts (destination;source)
  for vol in $(docker inspect -f '{{ range .Mounts }}{{ if eq .Type "volume" }}{{ .Destination }}{{ printf ";" }}{{ .Source }}{{ end }}{{ end }}' $id); do
    csv+="$vol\n"
  done
done
echo ""; echo -e $csv | column -t -s';'; echo ""

 

Let's say it returns this; now you know which container caused it:

[screenshot: command output as a table of container names, IDs, and paths]

 

Now you can think about how to solve your problem. Maybe the container is writing logs inside the container volume? Or it constantly writes to a /tmp dir? Then you could add this path to your container settings and map it to Unraid's /tmp (which is already a RAM disk; those files will be lost on reboot, of course):

[screenshot: container template with an additional path mapping to Unraid's /tmp]
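
 

On the plain Docker CLI such a mapping would look roughly like this (the names and the container-side /tmp path are illustrative; in Unraid you add it as an additional path in the container template):

# map a folder below Unraid's RAM-backed /tmp over the container's /tmp
# (my-container and my-image are hypothetical names)
docker run -d -v /tmp/my-container:/tmp my-image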

 

Another step could be to analyze the most recent writes in the appdata directory:

find /mnt/cache/appdata -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n30

 

That's how I found out that Nginx Proxy Manager was constantly writing logs:

 

[screenshot: Nginx Proxy Manager log files in appdata with current timestamps]

 

And again I created a new path mapping to Unraid's /tmp (which is already a RAM disk):

 

[screenshot: container template with the NPM log path mapped to /tmp]

 

That's how I reduced my writes by 90%.

 

More information in this German thread:

https://forums.unraid.net/topic/112617-ssd-abnutzung-maßgeblich-reduzieren/


I've used the ideas here and have definitely reduced my disk writes on v6.11.5 by at least 60%.

I'm still trying to catch more of them and minimize them or move them to RAM. But I'm wondering how many writes other people see during idle.

 

I've tracked mine over the last 4 days and I'm seeing between 300k and 600k writes per 24h on 6-7 Docker containers plus a Home Assistant VM. I honestly don't know if that's a lot or not. Should I be satisfied, or do I need to dig deeper?


Hey,

I installed a fresh 6.12.0-rc2 on a test system to test a few things.
There I noticed that when I use the script "RAM disk for Docker json/log files v1.3", this error appears:

 

Mar 22 05:48:01 TOWER crond[1018]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
Mar 22 05:49:01 TOWER crond[1018]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
Mar 22 05:50:01 TOWER crond[1018]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
Mar 22 05:51:01 TOWER crond[1018]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null

 

This error occurs every full minute.
After I removed the script and restarted the system, the error no longer occurs.
Greetings, Patty
 

On 3/22/2023 at 2:04 PM, Patty92 said:

After I removed the script and restarted the system, the error no longer occurs.

Thank you for your report. This bug has been fixed. I simply forgot to use quotes around "i" and "H" in this line:

[screenshot: the corrected line in the monitor script]
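
 

For reference, the corrected line as it appears in the fixed script (the same line as in the v1.4 listing further down; note the quoted "i" and "H" arguments to date()):

if ( ! ((date("i") * date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {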

 

It seems the recent PHP version is a little bit more picky ^^


Just wanted to say thank you @mgutt! I had an earlier version of your RAM disk installed. I wasn't even sure of the version, as it didn't have the version number in the comments; it must have been from a couple of years ago. In any case, after upgrading to 6.11 recently, I noticed that my Disk 1 would get frequent writes, which would wake it up along with my two parity disks. This is despite me having emptied all contents from Disk 1, and all of my Dockers running from the SSD cache pool. Also, new writes should have gone to the cache pool and not the array. I spent hours watching iotop and dstat, and I was about to pull my hair out when I noticed that Disk 1 would only wake up when certain Docker containers were running (specifically DiskSpeed and Home Assistant). On a whim, I looked to see if there was a newer version of the RAM disk script available, and found this thread. I updated the RAM disk code, and voila! No more disks waking up! Still not sure why certain Dockers were writing directly to the array or why it was always Disk 1, but I'm glad the new code fixed the issue :)


v1.4 of this script will not play nice with 6.12.0-rc6 (and later?).

 

My docker share (on a ZFS pool) with docker-xfs.img would not unmount the Docker mount (busy) when rebooting the server or stopping the Docker service. This ends in an unclean shutdown when Unraid forces the reboot (and a parity check once rebooted).

 


I finally tried it today with 6.12.2, but unlike the post above me, I still get an error message when stopping Docker:

 

Jul  8 19:31:15 smartserver root: stopping dockerd ...
Jul  8 19:31:17 smartserver root: umount: /var/lib/docker_bind: not mounted.
Jul  8 19:31:17 smartserver docker: Error: RAM-Disk bind unmount failed while docker stops!
Jul  8 19:31:17 smartserver docker: RAM-Disk removed
Jul  8 19:31:17 smartserver emhttpd: shcmd (28898): umount /var/lib/docker

 

The only differences are the default sync time of 30 minutes and that my SSD is formatted with BTRFS instead of XFS. No errors seem to occur during Docker startup or the sync process.


Seems like a logic issue when the mount point (directory) is present but it is not actually a "mount". The code in this line is only looking for a directory:

if [[ -d /var/lib/docker_bind ]]; then umount /var/lib/docker_bind || logger -t docker Error: RAM-Disk bind unmount failed while docker stops!; fi\

 

 

But maybe something like this would work better since it actually checks for a mount:

mount | awk '{if ($3 == "/var/lib/docker_bind") { exit 0}} ENDFILE{exit -1}' && umount /var/lib/docker_bind || logger -t docker Error: RAM-Disk bind unmount failed while docker stops!\

Of course, I can't really test this since I'm not using the script yet. I'm just following along until I upgrade and will add this afterward, or hope it's part of future "improvements". ;)
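
 

Untested as well, but mountpoint(1), which the script already uses for the RAM disk itself, would make the same check simpler (inside the sed block each line would still need its trailing backslash):

# only attempt the unmount if the bind mount is actually mounted
if mountpoint -q /var/lib/docker_bind; then
  umount /var/lib/docker_bind || logger -t docker Error: RAM-Disk bind unmount failed while docker stops!
fi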

 


Either I'm going completely cuckoo, or something changed in 6.12.3. I tested it with 6.12.2 a while back and it just worked, but now I've updated to 6.12.3 and the /etc/rc.d/rc.docker file won't get edited at all. /usr/local/emhttp/plugins/dynamix/scripts/monitor is still receiving the change.


Update:
I can gladly announce that I am not going crazy. The comments in the file are gone.

 

sed -i '/^  echo "starting \$BASE ..."$/i \

can be worked around with 
 

sed -i '/^  echo "starting \$DOCKERD ..."$/i \


but for stopping Docker, the anchor

sed -i '/^  # tear down the bridge$/i \

has been changed into a 15-count loop that contains the comment. I would at least expect that one to be inserted there, but for some reason it wasn't. So there might be more funky magic going on.


Update #2

Got it to work

 

# -------------------------------------------------
# RAM-Disk for Docker json/log files v1.4 for 6.12.3
# -------------------------------------------------

# create RAM-Disk on starting the docker service
sed -i '/^  echo "starting \$DOCKERD ..."$/i \
  # move json/logs to ram disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  mountpoint -q /var/lib/docker/containers || mount -t tmpfs tmpfs /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be mounted!\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  logger -t docker RAM-Disk created' /etc/rc.d/rc.docker

# remove RAM-Disk on stopping the docker service
sed -i '/^        # tear down the bridge/i \
  # backup json/logs and remove RAM-Disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  umount /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be unmounted!\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  if [[ -d /var/lib/docker_bind ]]; then umount /var/lib/docker_bind || logger -t docker Error: RAM-Disk bind unmount failed while docker stops!; fi\
  logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker

# Automatically backup Docker RAM-Disk
sed -i '/^<?PHP$/a \
$sync_interval_minutes=30;\
if ( ! ((date("i") * date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
  exec("\
    [[ ! -d /var/lib/docker_bind ]] && mkdir /var/lib/docker_bind\
    if ! mountpoint -q /var/lib/docker_bind; then\
      if ! mount --bind /var/lib/docker /var/lib/docker_bind; then\
        logger -t docker Error: RAM-Disk bind mount failed!\
      fi\
    fi\
    if mountpoint -q /var/lib/docker_bind; then\
      rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers && logger -t docker Success: Backup of RAM-Disk created.\
      umount -l /var/lib/docker_bind\
    else\
      logger -t docker Error: RAM-Disk bind mount failed!\
    fi\
  ");\
}' /usr/local/emhttp/plugins/dynamix/scripts/monitor

 

On 7/17/2023 at 11:30 PM, rkotara said:

Seems like a logic issue when the mount point (directory) is present but it is not actually a "mount". The code in this line is only looking for a directory:

if [[ -d /var/lib/docker_bind ]]; then umount /var/lib/docker_bind || logger -t docker Error: RAM-Disk bind unmount failed while docker stops!; fi\

 

 

But maybe something like this would work better since it actually checks for a mount:

mount | awk '{if ($3 == "/var/lib/docker_bind") { exit 0}} ENDFILE{exit -1}' && umount /var/lib/docker_bind || logger -t docker Error: RAM-Disk bind unmount failed while docker stops!\

Of course, I can't really test this since I'm not using the script yet. I'm just following along until I upgrade and will add this afterward, or hope it's part of future "improvements". ;)

 

 

This doesn't work for me with 6.12.3. I can't see any logs related to the RAM disk script when replacing this line and stopping Docker.

Link to comment
32 minutes ago, kennymc.c said:

 

This doesn't work for me with 6.12.3. I can't see any logs related to the RAM disk script when replacing this line and stopping Docker.

mhmm, that's odd. I did test it with a brand new USB/install.

Is the /etc/rc.d/rc.docker file changed at all?

You gotta excuse me for my awful picture quality.

[low-quality photo showing the applied changes in /etc/rc.d/rc.docker]

