Kagoromo Posted November 5, 2021 Hi, big thanks to OP for figuring all these fixes out. I have a docker app that writes to a redis db (some 50 MB large) every few seconds, and your ramdisk trick really helped tame the cache writes. I just want to add a note that, if you do go ahead with the 'appramdisk' route, the container's appdata path also has to be modified to use that appramdisk, on top of all the scripts that you created. It took me an embarrassingly long time to figure out why the cache was still constantly having ~10 MB/s written to it despite the appramdisk script being up and running. Hope that helps someone else!
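A minimal sketch of what that path change means, for anyone else following the 'appramdisk' route. The container name, image and paths here are only examples; the point is that the host side of the container's data/config mapping must point inside the appramdisk mount rather than the regular appdata share:

# Hypothetical example: the host path of the container's data mapping is moved
# into the RAM disk. Adjust container name, image and paths to your own setup.
docker run -d \
  --name=redis \
  -v /mnt/cache/appdata/appramdisk/redis:/data \
  redis:latest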
TexasUnraid (Author) Posted November 5, 2021 3 hours ago, Kagoromo said: Hi, big thanks to OP for figuring all these fixes out. I have a docker app that writes to a redis db (some 50 MB large) every few seconds, and your ramdisk trick really helped tame the cache writes. I just want to add a note that, if you do go ahead with the 'appramdisk' route, the container's appdata path also has to be modified to use that appramdisk, on top of all the scripts that you created. It took me an embarrassingly long time to figure out why the cache was still constantly having ~10 MB/s written to it despite the appramdisk script being up and running. Hope that helps someone else! I am embarrassed that I didn't include this in the OP lol. Don't feel too bad, I set up a new docker the other day, forgot to do this as well, and it took me longer than it should have to figure out the issue.
Crovaxon Posted December 15, 2021 On 8/28/2021 at 12:41 PM, mgutt said: PS Instead of this: --mount type=tmpfs,destination=/tmp,tmpfs-mode=1777,tmpfs-size=256000000 I prefer creating a new path and linking it to Unraid's /tmp, which is already a RAM-Disk. The tmp size is already limited to 50% of the RAM, and ultimately no application should write that much temporary data. My total size in Unraid is 57MB: du -sh /tmp 57M /tmp Hello, I was looking into this to have the tmp folders of some of my dockers reside in RAM as well, and this seemed a straightforward way to set it up. Initially it also worked, but I had reason to reboot my machine, and then suddenly my Nextcloud docker refused to start up as it was not able to write to its mapped tmp folder; the perms on the nextcloud folder in tmp were set up for the user root. After some digging I also found this: it seems using the /tmp folder for something like this is not encouraged.
TexasUnraid (Author) Posted December 15, 2021 Yeah, this is why I went with the more dedicated docker setup, since it was designed for this use case. /tmp works fine for a lot of things but not everything. If it works, great, go ahead and use it. If not, then swap to the dedicated docker tmp.
mgutt Posted December 15, 2021 19 hours ago, Crovaxon said: the perms on the nextcloud folder in tmp were set up for the user root. Yes, I have this problem, too. I think this is a bug in docker or Unraid: creating the dir through the container while installing it works as expected, but not if the dirs are missing and the container is only started.
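A possible workaround, sketched under the assumption that the container runs as the usual Unraid nobody:users user (99:100); check your container's PUID/PGID before copying this, and note the path is hypothetical. The idea is simply to pre-create the mapped directory with the expected ownership before the container starts:

# Pre-create the host-side tmp dir so the container can write to it even when
# it is started without being reinstalled. UID/GID 99:100 is an assumption.
mkdir -p /tmp/nextcloud-tmp
chown 99:100 /tmp/nextcloud-tmp
chmod 777 /tmp/nextcloud-tmp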
Crovaxon Posted December 15, 2021 I have switched to the extra parameters variant of creating a ramdisk for my chatty and active containers; that works just as well for now. Now that I finally understand better how to take advantage of my added cache drive and how to help it not be bombarded, I have finally managed to get my array disks to stay spun down. The next step is to keep a good eye on what is interacting with my cache drive, and how much, on a daily basis; I hope to have my drive wear under good control soon.
madejackson Posted May 25, 2022 (edited) I want to add my two cents to this thread. Thanks everyone for their commitment to this thread. I have now come up with my own version of a solution based on OP's and mgutt's solutions. My solution is also 3-step based but more streamlined, in my opinion.

Step 1: Get rid of all unnecessary log and temp writes in appdata. I searched for every file or folder change containing "log", "temp" or "cache" in appdata using inotify:

inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata --includei="(cache)|(te?mp)|(logs?)"

Then you can mount these to tmpfs in badly behaving containers. The easiest way is to add "--tmpfs <path>" to Extra Parameters. mgutt's command works as well.

On 8/17/2021 at 4:38 PM, mgutt said: PS I'm using the same trick for Plex and changed the transcode path to /tmp:

Step 2: Move /var/lib/docker to a ramdisk and create a ramdisk in appdata for regularly written files.

On 8/17/2021 at 4:38 PM, mgutt said: I found an alternative solution: https://forums.unraid.net/bug-reports/stable-releases/683-unnecessary-overwriting-of-json-files-in-dockerimg-every-5-seconds-r1079/?tab=comments#comment-15472

Thanks mgutt for the hard part of the script in the go-file. I allowed myself to extend it a bit to allow a ramdisk on /mnt/user/appdata/appramdisk/ram. This is basically part 3 of OP's post but included in mgutt's solution. Add the script, create the folders in appdata and reboot.

Create the necessary directories:

mkdir /mnt/cache/appdata/appramdisk
mkdir /mnt/cache/appdata/appramdisk/ram
mkdir /mnt/cache/appdata/appramdisk/disk
chmod 777 /mnt/cache/appdata/appramdisk
chmod 777 /mnt/cache/appdata/appramdisk/ram
chmod 777 /mnt/cache/appdata/appramdisk/disk

Modified script for the go-file:

# -------------------------------------------------
# RAM-Disk for Docker json/log files
# -------------------------------------------------
# create RAM-Disk on starting the docker service
sed -i '/^ echo "starting \$BASE ..."$/i \
# move json/logs to ram disk\
rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
mount -t tmpfs tmpfs /var/lib/docker/containers\
rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
mount -t tmpfs tmpfs /mnt/user/appdata/appramdisk/ram\
rsync -aH --delete /mnt/user/appdata/appramdisk/disk/ /mnt/user/appdata/appramdisk/ram\
logger -t docker RAM-Disk created' /etc/rc.d/rc.docker

# remove RAM-Disk on stopping the docker service
sed -i '/^ # tear down the bridge$/i \
# backup json/logs and remove RAM-Disk\
rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
umount /var/lib/docker/containers\
rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
rsync -aH --delete /mnt/user/appdata/appramdisk/ram/ /mnt/user/appdata/appramdisk/disk\
umount /mnt/user/appdata/appramdisk/ram\
logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker

# Automatically backup Docker RAM-Disk
sed -i '/^<?PHP$/a \
$sync_interval_minutes=30;\
if ( ! ((date('i') * date('H') * 60 + date('i')) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
exec("mkdir /var/lib/docker_bind");\
exec("mount --bind /var/lib/docker /var/lib/docker_bind");\
exec("rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers");\
exec("umount /var/lib/docker_bind");\
exec("rmdir /var/lib/docker_bind");\
exec("rsync -aH --delete /mnt/user/appdata/appramdisk/ram/ /mnt/user/appdata/appramdisk/disk");\
exec("logger -t docker RAM-Disk synced");\
}' /usr/local/emhttp/plugins/dynamix/scripts/monitor

Don't forget to add an exclusion for appramdisk/ram in the "Backup/Restore Appdata" plugin, otherwise the backup could fail (I need to test this more, but better safe than sorry).

Step 3: rsync desired folders/files to appdata/ram and change the docker templates accordingly. Use inotify once again to find candidates for changing the mount to the ramdisk:

inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata --exclude appramdisk

Example for radarr:

rsync -avH /mnt/user/appdata/radarr /mnt/user/appdata/appramdisk/ram/ --exclude MediaCover --exclude Backup

During the whole of step 3 you should always consider that less is more, as you don't want to waste your precious limited RAM. Hence why I exclude MediaCover in the example above, as this holds about 1GB of data for me; the rest of the radarr appdata is a mere 36MB. Also, if only one file is accessed often, try to move this file into a separate folder and only sync this folder to RAM. I did this for home-assistant_v2.db, for example.

I'm sure I will come up with some updates in the future and will post them accordingly. Cheers.

Update concerning Plex / whitespaces in paths: I experienced some issues with Plex. I suspect it's due to whitespaces in the paths. I rsynced the folder to the ramdisk, deleted it and created corresponding symbolic links to /Logs and /Databases in the corresponding paths, then created the paths in my appramdisk/ram folder, which resolved the issue completely.

ln -s /Logs /mnt/user/appdata/PlexMediaServer/Library/Application\ Support/Plex\ Media\ Server/Logs/

Edited June 1, 2022 by madejackson
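For anyone adopting this layout, a quick way to sanity-check how much RAM the tmpfs mounts actually consume. The paths assume the appramdisk layout described above, and the 80% threshold is just an example value:

# Show current size/usage of the two RAM-disk mounts
df -h /var/lib/docker/containers /mnt/user/appdata/appramdisk/ram

# Optional user-script sketch: warn in syslog if the appdata RAM disk runs full
usage=$(df --output=pcent /mnt/user/appdata/appramdisk/ram | tail -1 | tr -dc '0-9')
if [ "$usage" -gt 80 ]; then
  logger -t appramdisk "RAM disk is ${usage}% full"
fi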
TexasUnraid (Author) Posted May 25, 2022 Still wish this could be turned into a plugin; it would make things so much simpler and is a prime candidate IMHO.
mgutt Posted June 1, 2022 On 5/25/2022 at 11:44 AM, madejackson said: I searched for every file or folder change containing "log", "temp" or "cache" in appdata using inotify. You are searching in appdata. 99% of all container templates do not mount such dirs to appdata. The above example mounts the /tmp dir inside the Plex container to the container's RAM.
madejackson Posted June 1, 2022 (edited) 1 hour ago, mgutt said: You are searching in appdata. 99% of all container templates do not mount such dirs to appdata. Unfortunately they do, probably due to a lack of alternatives? radarr, sonarr, Plex etc. all have logs in appdata. Last week I found an 8GB log file in appdata. Probably some old culprit or misconfiguration, but still. As a template creator I'd strongly encourage data integrity / data safety as the default setting, but I as a user should have the option to change that. A better solution should come from Unraid or a plugin rather than the container templates (e.g. an optional ramdisk cache for specific shares or folders). Edited June 1, 2022 by madejackson
bramv101 Posted June 1, 2022 All, why does this forum post have only a few contributors? Does this imply that the majority of Unraid users are no longer confronted with this problem of excessive writes? If so, how come? Do they use a different setup, or do people simply accept the early death of SSDs?
Crovaxon Posted June 1, 2022 Not every docker container writes excessively to a log file, and not every person uses chatty docker containers, so those people are naturally not confronted with it, or only see an amount of writes that they do not care or worry about. And I assume there are also people that do not notice it at all. I did not immediately notice it myself either; everything was working, after all, and my SSD being active seemed logical to me the more containers I introduced to my system. Only on closer inspection did I see that the amount of write accesses seemed unexpectedly high while none of my containers were particularly heavily used or active.
TexasUnraid (Author) Posted June 1, 2022 Yeah, that^. Most people either do not use dockers, or if they do, they do not keep track of the writes to the SSDs until they have an issue.
kizer Posted September 3, 2022 Or some of us simply don't know what to do with the problem in the first place.
funbubba Posted December 12, 2022 On 7/3/2021 at 11:03 AM, TexasUnraid said: mount -vt tmpfs -o size=8G appramdisk /mnt/write-cache/appdata/appramdisk Great guide! My containers had killed two SSDs before I saw this guide, so I've just decided to stick to a spinner cache disk. However, I wanted to move a few containers to the ramdisk for performance. I noticed this line uses the mount point `/mnt/write-cache/...` and I thought it was some kind of system path, but it gave me an error when I ran it manually. I'm just using `/mnt/cache/...` instead. Was this a typo, or have I set this up incorrectly?
TexasUnraid (Author) Posted December 13, 2022 (edited) On 12/12/2022 at 8:18 AM, funbubba said: Great guide! My containers had killed two SSDs before I saw this guide, so I've just decided to stick to a spinner cache disk. However, I wanted to move a few containers to the ramdisk for performance. I noticed this line uses the mount point `/mnt/write-cache/...` and I thought it was some kind of system path, but it gave me an error when I ran it manually. I'm just using `/mnt/cache/...` instead. Was this a typo, or have I set this up incorrectly? Oops, my bad, that is the custom path on my server; I have a separate cache pool just for the dockers, and my normal cache is on RAID5 15k drives. Yes, just swap it to whatever your cache path is. Although, to be honest, I don't actually use the write cache for the array. I like controlling which disk the data is put on, so I move most data to the array manually using the disk paths. Most of the network usage is either reading or goes directly to the cache. The stuff that doesn't is generally backups, and that is limited by SMB due to small files, so there's no reason to use the cache. Edited December 13, 2022 by TexasUnraid
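On a stock setup, the same mount would look something like this; the 8G size is just the value from the original guide, so size it to your own RAM:

# Create the RAM disk on the default Unraid cache path instead of the
# author's custom /mnt/write-cache pool. Size is an example value.
mkdir -p /mnt/cache/appdata/appramdisk
mount -vt tmpfs -o size=8G appramdisk /mnt/cache/appdata/appramdisk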
Jorgen Posted March 14, 2023 (edited) On 7/4/2021 at 1:03 AM, TexasUnraid said: Like above, first you need to log the appdata folder to see where the writes come from. This command will watch the appdata folder for writes and log them to /mnt/user/system/appdata_recentXXXXXX.txt: inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /mnt/user/appdata/*[!ramdisk] > /mnt/user/system/appdata_recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt I had problems with this inotify command. It ran and created the text file, but nothing was ever logged to the file. I can only get it to log anything by removing *[!ramdisk] AND pointing it to /mnt/cache/appdata. Just curious if anyone can explain why this is? I have my appdata share defined (cache prefer setting) as /mnt/cache/appdata for all containers and as the default in docker settings, but I would have thought that /mnt/user/appdata should still work? Edited March 14, 2023 by Jorgen
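A sketch of the variant Jorgen describes as working: watching the cache path directly, and borrowing inotifywait's own --exclude from madejackson's command above to skip the ramdisk folder instead of relying on the shell glob (paths follow the earlier posts):

# Watch the cache path directly and exclude the ramdisk folder.
inotifywait -e create,modify,attrib,moved_from,moved_to \
  --timefmt %c --format '%T %_e %w %f' \
  -mr /mnt/cache/appdata --exclude appramdisk \
  > /mnt/user/system/appdata_recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt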
Squid Posted April 14, 2023 @mgutt The script and its modification to monitor cause issues on 6.12
mgutt Posted April 14, 2023 2 hours ago, Squid said: The script and its modification to monitor cause issues on 6.12 He used an old version.
nitrodragon Posted October 9, 2023 Hello! Scratching my head on this one since I cannot seem to find anything on it. After running inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /var/lib/docker > /mnt/user/system/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt I seem to be getting an extreme number of CREATE MODIFY MOVED_FROM MOVED_TO writes to status.new in the log:

Sun Oct 8 22:59:06 2023 CREATE /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status.new
Sun Oct 8 22:59:06 2023 MODIFY /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status.new
Sun Oct 8 22:59:06 2023 MOVED_FROM /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status.new
Sun Oct 8 22:59:06 2023 MOVED_TO /var/lib/docker/btrfs/subvolumes/404e15aa91364dd8df1dde64879edbe7ed537f5e2dabd6a35facd611c94a3029/run/s6-rc:s6-rc-init:AJBBHG/servicedirs/svc-nginx/supervise/ status

Log and diagnostics from the server attached. Would deleting and recreating my docker image solve this? My poor cache drive has had 3TB written in the last few days. I am on 6.10.3 and have already deleted and formatted my cache pool. recent_modified_files_20231008_225740.txt tower-diagnostics-20231008-2308.zip
Philsko Posted May 10 I just stumbled upon this topic (10 March 2024). The initial post is from 2021; since then we have moved from Unraid version 6.8.3 to 6.12.10. Is this still relevant? My SSD cache drive died in 2022 and I would prefer not to have it happen again ^^
gyrene2083 Posted September 19 I too would like to know if this is still the way to go, as I just saw my cache being written to non-stop.
PhilBarker Posted October 12 (edited) Seeing as a few people have asked if this is still relevant, I thought I'd comment: yes, yes it is, VERY.

I built my plex server with 2 x 1tb SSD's, I believe they were 870 evo's. Unfortunately they started throwing write errors in under a year. I know there were some weird firmware issues with them and a lot of noise about batches failing prematurely. So in the bin they went, I couldn't be bothered RMA'ing them, but I didn't have a lot of money to replace them so just went with Crucial BX500's. Those are now 11 months old and the first one has gone into a failing state with write errors🤦‍♂️ They are probably getting RMA'd because they're under a year old and have 80TBW, where they're advertised to cope with 300+, but anyway....

I did some digging because it felt like the cache pool with the 2 SSD's in it was just constantly being written to. This guide is amazing; I set up the inotify and sat watching the file to see what was being written. Pihole was insanely noisy, constant pihole.log stuff, so I ended up pointing `/var/log` at `/tmp/docker/pihole/log` so that it would write directly to the Unraid tmp folder. As per @nitrodragon above, I then found it was constantly writing to the supervise status, so I pointed `/run` to `/tmp/docker/pihole/run`.

This left the files being written as mostly healthchecks and the unraid logs, so I went through all my containers and set --health-interval=60m --log-driver syslog --log-opt syslog-address=udp://127.0.0.1:541 This dropped the writes massively, and logs still work fine. I don't really care about healthchecks as I have uptimekuma monitoring all my essential services, so once an hour is fine. At this point, watching the writes, it had dropped down to virtually nothing, just database activity in nextcloud, which is kinda the only reason I used SSD's in the first place as DB's running on slow spinning sata drives are unusable.

I then went on to replace the dying drive and this time bought 2 x 1tb western digital red drives. The ZFS pool drive replacement was a doddle, resilvered the new drive in 30 minutes. Then I swapped the other BX500 for a new WD red too. All back up and running, a massive drop in writes, so hopefully these drives last longer. They have a much higher TBW and a 5 year warranty. I don't know if it's due to the BX500's generally being utter shite or the drop in writes, but the whole docker system seems 100x faster now. When I started the machine up it would take about 10 minutes for all my containers to come up; now it's 2 minutes. The UI and everything feels much snappier.

I wish this guide was pinned somewhere or I'd seen it earlier in my 3 year Unraid journey, would have saved £400 in SSD's being thrashed to death writing pointless log files 😅 Edited October 12 by PhilBarker
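To pull the pieces above together, roughly what those settings could look like as a docker run equivalent for a Pi-hole-style container. The paths, syslog port and health interval mirror the post and are PhilBarker's choices rather than defaults, and a real Pi-hole container still needs its usual ports and environment variables; treat this as a stripped-down sketch of the write-reduction settings only:

# Sketch: write-reduction settings only; add your normal ports/env on top.
docker run -d \
  --name=pihole \
  --health-interval=60m \
  --log-driver syslog \
  --log-opt syslog-address=udp://127.0.0.1:541 \
  -v /tmp/docker/pihole/log:/var/log \
  -v /tmp/docker/pihole/run:/run \
  pihole/pihole:latest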
smol Posted October 14 Does the new version 7 beta 3 offer anything new regarding this, and/or is there some plugin created to this effect? Or is there still a lot of fiddling involved (as in, not user friendly)? There should have been a 3-tier storage solution in Unraid from the beginning:
Tier 1: an array pool for spinner disks.
Tier 2: a pool for NVMe and SSD disks.
Tier 3: a RAM disk.
This way a user could create a truly fast and equally robust system. I guess one can hope for version 8.0.
DaifukuPanda Posted October 23 On 10/12/2024 at 2:32 PM, PhilBarker said: All back up and running, a massive drop in writes, so hopefully these drives last longer. They have a much higher TBW and a 5 year warranty. You could also use enterprise U.2 NVMe drives, which offer significantly more endurance. https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/ssd-dc-p4600-brief.pdf