maxstevens2 Posted February 28, 2021

I do worry a bit about the solution, as this guy 'lost' his dockers. Which would in my case mean I lose most of the Home Assistant configuration (not the configuration files, but things like built-in integrations, internal user settings etc.)

On 5/12/2020 at 7:28 PM, hovee said: I tried following your tutorial, but after reboot it tells me I don't have any docker containers installed. It shows me the docker service is running. Checking the logs I can see it created the softlinks. If I remove the entry that copies the rc.docker file in the /boot/config/go directory, everything works again after a reboot. However, then it puts it back to the original and uses loop2.
Niklas (Author) Posted February 28, 2021

6 minutes ago, maxstevens2 said: I do worry a bit about the solution, as this guy 'lost' his dockers. Which would in my case mean I lose most of the Home Assistant configuration (not the configuration files, but things like built-in integrations, internal user settings etc.)

You don't need the solution mentioned there if you're running 6.9-rc2. In 6.9 you can select a directory to store your docker data instead of the docker.img (in the GUI). You will lose all of your dockers, but you can easily add them back with all your settings intact by using the "ADD CONTAINER" button: select your docker from the "Template:" dropdown box and click Apply.
jay010101 Posted February 28, 2021

Hey guys. I had this issue and moved to the newest beta and it was fixed. You can also use the command below, but it's not persistent after a reboot: mount -o remount -o space_cache=v2 /mnt/cache

I hope this helps.
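Since that remount doesn't survive a reboot, one way people persist it on Unraid is by appending it to the go file on the flash drive. This is only a sketch under the assumptions that your pool is mounted at /mnt/cache and /boot/config/go is your startup script; and as noted below, it shouldn't be needed at all once you're on 6.9-rc2.

```shell
# Sketch only: persist the remount by appending it to Unraid's go file.
# GO_FILE is parameterized so you can point it at a scratch file to test;
# on a real server it would be /boot/config/go.
GO_FILE="${GO_FILE:-/boot/config/go}"
LINE='mount -o remount -o space_cache=v2 /mnt/cache'
if [ -e "$GO_FILE" ]; then
  # append only once, so repeated runs don't duplicate the entry
  grep -qxF "$LINE" "$GO_FILE" || echo "$LINE" >> "$GO_FILE"
fi
```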
maxstevens2 Posted February 28, 2021

2 minutes ago, Niklas said: You don't need the solution mentioned there if you're running 6.9-rc2. In 6.9 you can select a directory to store your docker data instead of the docker.img in the GUI. You will lose all of your dockers, but you can easily add them back with all your settings intact by using the "ADD CONTAINER" button: select your docker from the "Template:" dropdown box and click Apply.

Has 6.9 been stable for you? I've been waiting for the next big release, but if it's stable and the docker issue can be fixed, that would be perfect!
jay010101 Posted February 28, 2021

6.9, no issues here!
Niklas (Author) Posted February 28, 2021

2 minutes ago, maxstevens2 said: Has 6.9 been stable for you? I've been waiting for the next big release, but if it's stable and the docker issue can be fixed, that would be perfect!

57 days of uptime without any problems. I just moved from docker.img to a directory like three hours ago, running fine.

5 minutes ago, jay010101 said: Hey guys. I had this issue and moved to the newest beta and it was fixed. You can also use the command below, but it's not persistent after a reboot: mount -o remount -o space_cache=v2 /mnt/cache. I hope this helps.

As I understand it, that command is not needed anymore with 6.9-rc2. It only mitigates some of the write amplification, but it's a start.
maxstevens2 Posted February 28, 2021

Here it's now 23:29, so I am trying to get this fixed before Monday morning. I will update Unraid ASAP. I will of course back up the image and such.

8 minutes ago, Niklas said: You will lose all of your dockers, but you can easily add them back with all your settings intact by using the "ADD CONTAINER" button: select your docker from the "Template:" dropdown box and click Apply.

Is there a way to transport/move the dockers from the img to the 'new location'? Or is what you said the way to get the dockers back with their settings?
Niklas (Author) Posted February 28, 2021

1 minute ago, maxstevens2 said: Is there a way to transport/move the dockers from the img to the 'new location'? Or is what you said the way to get the dockers back with their settings?

Same time zone as me. Yes, that's what I said. You can try the "ADD CONTAINER" button now, before upgrading, and you will probably see what I mean. Your dockers with settings will be in the "Template:" dropdown list (under "User templates").
maxstevens2 Posted February 28, 2021

I might have found out that Home Assistant doesn't actually 'abuse' the docker image file as its storage location, so I am more hopeful now. I will now shut the dockers down and back up the image. Then I need to have one docker online at 00:00 CET. After that, upgrade time. I'll keep you updated lol. Thanks for the help so far though!!
Niklas (Author) Posted February 28, 2021

8 minutes ago, maxstevens2 said: I might have found out that Home Assistant doesn't actually 'abuse' the docker image file as its storage location, so I am more hopeful now. I will now shut the dockers down and back up the image. Then I need to have one docker online at 00:00 CET. After that, upgrade time. I'll keep you updated lol. Thanks for the help so far though!!

Also running Home Assistant. The most abuse I saw was writes to home-assistant_v2.db (recorder/history). I moved home-assistant_v2.db from the btrfs cache to my unassigned HDD in 6.8.3. With the multiple cache pool support in 6.9, I removed Unassigned Devices and added the HDD as a second cache pool formatted as XFS. Just for use with stuff that writes lots of data, to save some SSD life. No critical stuff.
maxstevens2 Posted February 28, 2021

Ah, my recorder/database for history is stored in MariaDB on docker... which is kinda how this all started. You might believe it is stupid to run it like that, but it ran on MariaDB inside a VM for months without any crazy writes, on an old HDD... But I won't get too database'y. Time to upgrade now though!
Squid Posted February 28, 2021

38 minutes ago, maxstevens2 said: Is there a way to transport/move the dockers from the img to the 'new location'? Or is what you said the way to get the dockers back with their settings?

Rather than try to save the docker.img, just recreate it. Less hassle. Also, rather than using Add Container to add them back one at a time, it's far easier to go to Apps > Previous Apps, check off what you want, and hit Install.
maxstevens2 Posted February 28, 2021

So, Unraid updated quite easily. The only issue now is the following, but it won't matter too much as 4/7 are just barebones nothing lol:

The rest of the 'fixes' I will try tomorrow (a.k.a. today), as the dockers don't impact a lot but the VMs do. Will work on it later though. Weirdly, the writing (according to Unraid's UI) doesn't seem to be that crazy right now: nearly a 3 MB/s write peak instead of 25 MB/s. See the picture (the red line is roughly where docker started):

Before the update it literally spiked continuously from 12 to 2 to 25 and sometimes (server at high performance) to 54. I'll very much keep an eye on it for now. Maybe I don't even need to change anything. Or do you guys have suggestions? Really have to give props already to @Niklas for the update suggestion; otherwise I would have needed to apply the solution on page 2 of the 'report'. Good night :')
Niklas (Author) Posted February 28, 2021

16 minutes ago, Squid said: Rather than try to save the docker.img, just recreate it. Less hassle. Also, rather than using Add Container to add them back one at a time, it's far easier to go to Apps > Previous Apps, check off what you want, and hit Install.

Far easier, yes. I knew I had that information locked away somewhere in my brain. Thanks for pointing that out! I just moved from docker.img to a directory and added my 20 containers back using the Add Container button. Wasted some time there.
maxstevens2 Posted February 28, 2021

Maybe a bit off topic, but do you use dockers for Home Assistant MQTT @Niklas? My power usage is now literally showing in a kind of encrypted style in the log lol. And when I am not on the website, it won't show anything at all..
Niklas (Author) Posted March 1, 2021

20 minutes ago, maxstevens2 said: Maybe a bit off topic, but do you use dockers for Home Assistant MQTT @Niklas? My power usage is now literally showing in a kind of encrypted style in the log lol. And when I am not on the website, it won't show anything at all..

Not using MQTT (yet). I think it has something to do with networking. I don't know if this bug report could help.
S1dney Posted March 1, 2021

7 hours ago, maxstevens2 said: Really have to give props already to @Niklas for the update suggestion; otherwise I would have needed to apply the solution on page 2 of the 'report'.

The go (and service.d) file modifications are not needed anymore in Unraid 6.9-rc2, as they basically created a way to do exactly that from the GUI. The GUI should now allow you to host the docker files in their own directory directly on the cache (which is what the workaround did via scripts instead of the GUI). I haven't moved there myself yet, as I still see enough issues with rc2 for now. I believe that reformatting the cache is also advisable, because they have made some changes to the partition alignment there as well (you may or may not get fewer writes because of it; I'm using Samsung EVOs, so I'm considering wiping and reformatting mine after the upgrade). Cheers!
maxstevens2 Posted March 1, 2021 (edited)

So, coming back to this now. I changed a few local things:
- I've updated Unraid to 6.9-rc2.
- Home Assistant now only saves to the database every 30 seconds.

I came up with these results in a 34-minute timespan: all calculated, this generates around 47 GB of writes a week, which I find more casual for the disks (due to there being 3 of them). Recalculated, this is around 4.7 MB per minute and 0.07 MB per second (and around 0.149 for btrfs-transacti). A yearly sum would be around 2466 GB of writes (docker only). I think this is a neat improvement compared to the nearly continuous 20+ MB/s from before, which would be 1.296 (or just 1.3) TERABYTES per day at 15 MB/s 24/7.

As for the thing I reported earlier: I actually broke the log (/var/log/syslog is just empty). I guess this doesn't affect loop2 but I will retest.

Edit: Here you can clearly see the 30-second mark when Home Assistant pushes its data out. Before, the saving rate was every 10 seconds.
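For anyone wanting to sanity-check those projections, here is a quick back-of-envelope in awk. The 0.078 MB/s rate is my own reading of the numbers above (47 GB/week works out to roughly that), not an exact figure from the post, so the yearly total lands near, but not exactly on, the ~2466 GB quoted.

```shell
# Back-of-envelope: project a steady write rate (MB/s) over longer periods.
# RATE_MBS is an assumed average derived from the ~47 GB/week figure above.
RATE_MBS=0.078
awk -v r="$RATE_MBS" 'BEGIN {
  printf "per day:  %.1f GB\n", r * 86400    / 1024
  printf "per week: %.1f GB\n", r * 604800   / 1024
  printf "per year: %.0f GB\n", r * 31536000 / 1024
}'
```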
maxstevens2 Posted March 1, 2021

Got it all fixed now: no crazy writes anymore, and no crazy action. I do try to keep the usage of MariaDB as low as I can while still having some performance, so I won't cache like 15 minutes of data and then drop it all onto the database at once. For anyone wondering, I've applied the following workaround for the insane logging:

Just now, maxstevens2 said: Hi, I've been on the same issue. An 'easy way' to fix this (temporarily, but without crazy always-running scripts) is by adding something to the blacklist of rsyslogd. In my case I added the following:

:msg,contains,"tun:" stop

KEEP IN MIND THAT I ADDED, NOT REPLACED. There were 2 other lines in the file for me:

:msg,contains,"Error: Nothing to do" stop
:msg,contains,"user \"logout\" was not found in \"/etc/nginx/htpasswd\", client" stop

Hope someone can use this info as a workaround. I really couldn't change anything, like I said before; it would ruin the dockers' functions due to local network scans.

Very happy with the solution right now.
jay010101 Posted December 14, 2021

Anyone seeing this happening again? It looks like Plex is continuously writing to the cache.
maxstevens2 Posted December 14, 2021

2 minutes ago, jay010101 said: Anyone seeing this happening again? It looks like Plex is continuously writing to the cache.

I've not seen Plex do it. What are your other dockers? Or do you have a VM stored on the drive? What's the filesystem type?
jay010101 Posted December 15, 2021

11 hours ago, maxstevens2 said: I've not seen Plex do it. What are your other dockers? Or do you have a VM stored on the drive? What's the filesystem type?

Only have 3 dockers: Plex, UniFi and binhex-krusader. The reason I think it's Plex is that if I stop the container, the writes drop to something small, 0-100 KB/s. With Plex running I'm getting spikes of 500 KB/s, and in iotop I'm seeing [loop2] writes at the top of the list. Similar problem as in the past. I believe Plex was the issue last time, or it's docker in general. Maybe I should upgrade to the latest version; I'm on the stable branch right now, 6.9.2.
tjb_altf4 Posted December 15, 2021

19 minutes ago, jay010101 said: Only have 3 dockers: Plex, UniFi and binhex-krusader. The reason I think it's Plex is that if I stop the container, the writes drop to something small, 0-100 KB/s. With Plex running I'm getting spikes of 500 KB/s, and in iotop I'm seeing [loop2] writes at the top of the list. Similar problem as in the past. I believe Plex was the issue last time, or it's docker in general. Maybe I should upgrade to the latest version; I'm on the stable branch right now, 6.9.2.

There were a few issues that amplified writes; one of those was dockers like Plex doing health checks (which created small changes frequently in the docker.img). The fix was to add the following to the extra parameters of the offending docker under Edit Container > Advanced View: --no-healthcheck
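For context, the Extra Parameters field just appends flags to the docker run command Unraid generates. A dry-run illustration of what that ends up looking like (the container name and image are placeholders, not taken from this thread):

```shell
# Dry run: echo the docker run command instead of executing it.
# "plex" and the image tag are illustrative placeholders.
EXTRA_PARAMS="--no-healthcheck"
echo docker run -d --name plex $EXTRA_PARAMS plexinc/pms-docker:latest
```

On a live container you should then be able to confirm no health check is configured with `docker inspect -f '{{.State.Health}}' plex` (it prints `<nil>` when none is set).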
jay010101 Posted December 15, 2021

That's awesome! I'll give it a try. I didn't notice this until a recent Plex update; thought it was fixed.
azche24 Posted March 5, 2022

I did not go through this whole thread, but have you already checked this? Excessive writes to the cache drive from docker applications seem to be a bit of a design problem in the docker/UnRAID combination. Many, many docker apps write logs and temp files to the docker image / cache drive, which should not happen. For my installation I had to replace the official Plex with binhex-plex, add a few extra paths for temp files and map these to /tmp/, and install a RAM drive as explained by @mgutt here. It still does excessive writes; in my case these come from roon-docker, which has tons of logs in docker.img and writes tmp files there too. There should be a "design guideline" for dockers with UnRAID, because this is really annoying. From the UnRAID point of view, the problem is related to the design or implementation of the docker apps.
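One concrete shape the temp-file remap above can take, again as a dry run: mount a tmpfs into the container at whatever path the app writes scratch data to, so those writes land in RAM instead of the docker image / cache SSD. The /transcode path, size, and image here are illustrative assumptions, not azche24's actual setup.

```shell
# Dry run: a tmpfs mount keeps scratch writes in RAM, off the docker.img/SSD.
# Path, size, and image are illustrative assumptions.
echo docker run -d --name plex \
  --mount type=tmpfs,destination=/transcode,tmpfs-size=2g \
  plexinc/pms-docker:latest
```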