Hogwind Posted June 13, 2017 Hi, I have a problem with the "flash log docker" gauge filling up to 100%, which makes unRAID lose connection to user shares and other things. Right now it's about to happen again: it's at 23%, when it's usually around 2-3%. I have stopped the CouchPotato docker, as it was the first thing referenced in the log when the system log starts to fill up. I have attached a diagnostics file. I hope someone can lead me to find the problem and solve it. tower-diagnostics-20170613-1159.zip
kizer Posted June 13, 2017 Are you suffering from this? If so, here is a way to reduce the logs in your Docker containers.
trurl Posted June 13, 2017 "flash log docker" is 3 separate things. "flash" refers to how much space is used on your flash drive, "log" refers to how much of unRAID's log space is used, and "docker" refers to how much of your docker img is used. And docker logs are in the docker img, not in "log". Your diagnostics seem to indicate that "docker" was at 32% when they were taken. I think kizer has given you one clue. Take a look at some of the other posts in that FAQ for other ideas about why "docker" is filling up.
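To see the three numbers without guessing, each one can be read from a console. This is a rough sketch, assuming unRAID's usual mount points (/boot for flash, /var/log for the log tmpfs, /var/lib/docker for the mounted docker.img); the root-filesystem fallback is only there so the commands run on a non-unRAID box:

```shell
# Read the usage percentage of a mount point, falling back to / if the
# path does not exist (e.g. when trying this on a non-unRAID machine).
pct() { df --output=pcent "$1" 2>/dev/null | tail -1 | tr -dc '0-9'; }

flash=$(pct /boot);                flash=${flash:-$(pct /)}
log=$(pct /var/log);               log=${log:-$(pct /)}
docker_img=$(pct /var/lib/docker); docker_img=${docker_img:-$(pct /)}

echo "flash=${flash}% log=${log}% docker=${docker_img}%"
```

If the last number is the one climbing, it is the docker img; if the middle one, it is unRAID's log space.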
Hogwind Posted June 14, 2017 Author 9 hours ago, kizer said: Are you suffering from this? If so, here is a way to reduce the logs in your Docker containers. Yes I am suffering: unRAID stops working and I have to restart the whole server. The GUI doesn't respond after a while. I don't think it's my docker.img that gets full; I think the log is getting full, because some docker goes bananas or something.
Hogwind Posted June 14, 2017 Author 9 hours ago, trurl said: "flash log docker" is 3 separate things. "flash" refers to how much space is used on your flash drive, "log" refers to how much of unRAID's log space is used, and "docker" refers to how much of your docker img is used. And docker logs are in the docker img, not in "log". Your diagnostics seem to indicate that "docker" was at 32% when they were taken. I think kizer has given you one clue. Take a look at some of the other posts in that FAQ for other ideas about why "docker" is filling up. Thanks for clearing up the "flash log docker" layout. I will try to find a way to limit the docker log.
trurl Posted June 14, 2017 5 hours ago, Hogwind said: I don't think it's my docker.img that gets full; I think the log is getting full, because some docker goes bananas or something. There is no reason to guess at this. You should know which is getting full. The numbers appear in the same order as the labels: the first number is your flash, etc. If the last number says it is getting full, it is your docker img that is getting full.
Hogwind Posted June 14, 2017 Author 31 minutes ago, trurl said: There is no reason to guess at this. You should know which is getting full. The numbers appear in the same order as the labels: the first number is your flash, etc. If the last number says it is getting full, it is your docker img that is getting full. Okay, to be clear, it's the LOG that gets to 100% full. Now I've set an extra parameter on every docker: --log-opt max-size=50m --log-opt max-file=1. I will know in a couple of days if it's working. Now it's at 3%.
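Whether per-container limits like these are taking effect can be checked directly on disk: with Docker's default json-file log driver, each container's log lives under /var/lib/docker/containers. A sketch (the temp-directory fallback is only there so the commands run on a machine without Docker):

```shell
# With the json-file log driver, each container logs to
# /var/lib/docker/containers/<id>/<id>-json.log. List them largest first.
log_dir=/var/lib/docker/containers
[ -d "$log_dir" ] || log_dir=$(mktemp -d)   # fallback for non-Docker hosts

found=$(find "$log_dir" -name '*-json.log' 2>/dev/null | wc -l)
find "$log_dir" -name '*-json.log' -exec du -h {} + 2>/dev/null | sort -rh | head
echo "found $found container log(s) under $log_dir"
```

If one of those files keeps growing past the max-size you set, the limit is not being applied to that container (the options only take effect when the container is recreated).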
Squid Posted June 14, 2017 If it's the log entry, that won't help (but it certainly won't hurt and should be there anyway). Once you see the entry get above 20%, post your diagnostics.
Hogwind Posted June 14, 2017 Author 21 minutes ago, Squid said: If it's the log entry, that won't help (but it certainly won't hurt and should be there anyway). Once you see the entry get above 20%, post your diagnostics. Thanks, in my first post it's at 23%, with the diagnostics posted.
Squid Posted June 14, 2017 Share Posted June 14, 2017 This is the start of the problem Jun 13 09:47:34 Tower shfs/user: err: shfs_open: open: /mnt/cache/appdata/couchpotato/data/database/media_status_stor (24) Too many open files Which repeats for various different files many times, and then the actual thing that's filling up the log over and over (and over again) is this Jun 13 09:49:06 Tower shfs/user: err: shfs_flush: close: (9) Bad file descriptor I would start killing docker apps one at a time (reboot after each) and then see what happens. Start with the pigs first. Things like CrashPlan if installed. Link to comment
MrCrispy Posted June 14, 2017 I think the params to limit log file size should be the default when creating a container, or at least should be added automatically by the CA plugin. This seems to be a common issue.
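On a stock Docker install there is in fact a daemon-wide default, so the flags don't have to be repeated per container: the json-file driver's options can go in /etc/docker/daemon.json. A sketch; note that unRAID manages its own Docker configuration, so whether this file persists across reboots there is an assumption worth checking rather than a given:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "1"
  }
}
```

Existing containers keep their old logging configuration until they are recreated; the daemon default only applies to newly created ones.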
Hogwind Posted June 16, 2017 Author On 6/14/2017 at 5:16 PM, Squid said: This is the start of the problem: Jun 13 09:47:34 Tower shfs/user: err: shfs_open: open: /mnt/cache/appdata/couchpotato/data/database/media_status_stor (24) Too many open files This repeats for various files many times, and then the actual thing that's filling up the log over and over (and over again) is this: Jun 13 09:49:06 Tower shfs/user: err: shfs_flush: close: (9) Bad file descriptor I would start killing docker apps one at a time (rebooting after each) and then see what happens. Start with the pigs first, things like CrashPlan if installed. The apps that I have installed are:
CouchPotato
Headphones
jackett
Krusader
Libresonic
Mylar
Ombi
Plex
Plexpy
radarr
ruTorrent
Sonarr
Ubooquity
Watcher
Strikethrough = not running after reboot. Bold = running after reboot. I don't have CrashPlan; do you have an updated pig list? I also have a Windows 7 VM, but that is turned off for now. I will run this setup for a couple of days and see if it works out OK.
gurulee Posted May 7, 2019 I am now seeing this as well. The only workaround for me so far has been a reboot of unRAID. I have tried the extra parameter in advanced docker settings, but it threw an error on my docker and I had to reinstall the docker. The middle gauge was at 100% and all dockers and VMs were hung. Current uptime 1d 37h.
Squid Posted May 7, 2019 1 hour ago, guruleenyc said: I have tried the extra parameter in advanced docker settings, Won't affect this at all. 1 hour ago, guruleenyc said: The middle gauge was at 100% and all dockers and VMs were hung. Current uptime 1d 37h. Post the diagnostics when it starts going up.
gurulee Posted May 10, 2019 On 5/7/2019 at 11:27 AM, Squid said: Won't affect this at all. Post the diagnostics when it starts going up. So the problem has not yet returned, and I have not changed anything other than rebooting the two times it occurred (after approx. 24 hr) and starting back up a Win2012 VM. Uptime 4d 21h and the log is:
EDalcin Posted May 29, 2019 On 6/13/2017 at 5:56 PM, trurl said: "flash log docker" is 3 separate things. I suggest that the interface express this in a better way, because it's not clear at all, considering the number of questions on the forum about it. Maybe use "flash / log / docker" - "1% / 5% / 52%"
itimpi Posted May 29, 2019 1 hour ago, EDalcin said: I suggest that the interface express this in a better way, because it's not clear at all, considering the number of questions on the forum about it. Maybe use "flash / log / docker" - "1% / 5% / 52%" Another thought is to change the order slightly, to something like flash:1% log:5% docker:52%, so it is clearer what each figure is associated with. Having said that, I have just looked at the dashboard on a system running 6.7 rc1 (I do not have a 6.7 system to hand to check if it is the same), and maybe just a few extra words would help:
flash -> flash device
Log -> Log space
docker -> docker image
Archived
This topic is now archived and is closed to further replies.