kentromox Posted March 8, 2017

Hi. After updating from 6.3.1 to 6.3.2 I've been getting "Too many open files" and "Bad file descriptor" error messages in the log. It makes Plex unable to play content and leaves everything unusable. My temporary fix is to restart the server, but the problem comes back after 12 hours or so. Could someone please help me understand why this is happening?

ravanor-diagnostics-20170308-2231.zip
trurl Posted March 9, 2017

Disable the Docker service and reboot. Set your appdata share to cache-prefer, then run mover to get your appdata back onto the cache. Then enable the Docker service again. Not sure if any of that will help, but it will at least make your configuration more correct and less confusing to diagnose.
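For anyone who would rather do those steps from the console, here is a minimal sketch. It assumes a stock install where the share is named "appdata" and where unRAID keeps share settings under /boot/config/shares/; the webUI route above is the safer one, this just shows what it amounts to.

# 1. With the Docker service stopped, confirm nothing still holds appdata open:
lsof +D /mnt/user/appdata | head

# 2. After switching the share to cache-prefer in the webUI, the setting
#    should show up in the share's config file on the flash drive:
grep shareUseCache /boot/config/shares/appdata.cfg
# expected: shareUseCache="prefer"

# 3. Run mover manually instead of waiting for its schedule:
/usr/local/sbin/mover

# 4. Verify appdata now lives on the cache drive:
du -sh /mnt/cache/appdata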
kentromox (Author) Posted March 10, 2017

I'll see what I can manage over the weekend.

Update: the error has now appeared again after moving appdata to the cache. Here is the new diagnostic.

ravanor-diagnostics-20170317-1717.zip
citrius Posted April 6, 2017

I'm getting the same issue now. Does anyone have a solution for this? I'm on unRAID 6.3.3.
kentromox (Author) Posted April 16, 2017

Still have the issue. I tried going back and setting cache-prefer again, but it didn't work.
mudboy Posted April 16, 2017

I'm getting this too. This diagnostic is from just after a reboot; if it gets stuck again I'll try to post one from that state.

tower-diagnostics-20170416-0839.zip
Squid Posted April 16, 2017

For those having that error, I'm curious what the output of the following command is:

ulimit -a
kentromox (Author) Posted April 16, 2017

This is my output:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128178
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 40960
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 128178
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Squid Posted April 16, 2017

The simple fix, to at least mask the problem, would be:

ulimit -n 70000

(or some other suitably high number)
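To see that take effect, a quick sketch; note that ulimit only changes the shell it is run in and anything started from that shell afterwards, so a process that is already running keeps its old limit. The prlimit line is an assumption that util-linux's prlimit is available, and <PID> is a placeholder.

ulimit -n           # print the current soft limit (40960 in the output above)
ulimit -n 70000     # raise it for this shell and its children

# To change the limit of an already-running process instead:
prlimit --nofile=70000:70000 --pid <PID>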
kentromox (Author) Posted April 16, 2017

But will it stay like that permanently? I've tried that, but it keeps reverting back to the old setting after a reboot.
Squid Posted April 16, 2017

Did it fix it, though? And no, it's not permanent. For the time being you can install the User Scripts plugin and have it run the command at array start.
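A minimal sketch of what such an "At Array Start" script could look like; the 70000 value is just the example from above, and the logger line is an optional breadcrumb for the syslog. Whether the new limit reaches an already-running daemon depends on when that daemon starts relative to this script.

#!/bin/bash
# Raise the open-files soft limit when the array starts.
ulimit -n 70000
logger "user script: open-files soft limit now $(ulimit -n)"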
kentromox (Author) Posted April 16, 2017

23 minutes ago, Squid said: "Did it fix it though? [...]"

Going to try and replicate the problem and see if it works. Might take some time.

Edit: Had mixed results; sometimes it works, sometimes it just hits the limit again.
Squid Posted April 17, 2017

Then you've got to narrow it down by stopping docker apps until the problem disappears; then you know what's out of control. Or keep increasing the number. But something is running amok, and increasing the limit to obscene values only masks the problem.

I did ask @dlandon to incorporate a max-files setting into the Tips and Tweaks plugin, as I can see that with the ever-increasing complexity and number of apps running, more users will begin to need an increase over the 'nix defaults.
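One rough way to do that narrowing from the console, instead of stopping containers one by one: tally open file descriptors per running container by summing the fd tables of each container's processes under /proc. This is a sketch, not unRAID-specific; it assumes docker ps and docker top behave as on a stock Docker install.

for name in $(docker ps --format '{{.Names}}'); do
    total=0
    # docker top passes its trailing options to ps; skip the PID header line
    for pid in $(docker top "$name" -eo pid | tail -n +2); do
        total=$(( total + $(ls /proc/"$pid"/fd 2>/dev/null | wc -l) ))
    done
    echo "$name: $total open files"
done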
kentromox (Author) Posted April 17, 2017

Okay, I've managed to narrow it down to Headphones and Jackett. The errors only start when those are on, and then they begin affecting everything else. I also get "Bad file descriptor" error messages. I tried raising the limit from 70000 to 80000, then just kept adding zeroes to the end; the problem still persists.

Apr 17 13:36:33 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:34 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:35 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/sickrage/cache.db (24) Too many open files
Apr 17 13:36:35 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/sickrage/cache.db (24) Too many open files
Apr 17 13:36:35 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 17 13:36:38 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:39 Ravanor shfs/user: err: shfs_open: open: /mnt/disk6/appdata/deluge_alex/state/torrents.state.tmp (24) Too many open files
Apr 17 13:36:39 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:40 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 17 13:36:40 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:41 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:42 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:43 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/sickrage/cache.db (24) Too many open files
Apr 17 13:36:43 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/sickrage/cache.db (24) Too many open files
Apr 17 13:36:43 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 17 13:36:45 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:46 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:47 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:48 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk2/Torrent/Syncthing/Dokumenter (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk4/Torrent/Syncthing/Dokumenter (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_create: open: /mnt/disk6/appdata/sickrage/sickbeard.db-journal (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_open: open: /mnt/disk6/appdata/radarr/nzbdrone.db-wal (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_open: open: /mnt/disk6/appdata/radarr/nzbdrone.db-wal (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_open: open: /mnt/disk8/appdata/radarr/logs/radarr.txt (24) Too many open files
Apr 17 13:36:49 Ravanor shfs/user: err: shfs_open: open: /mnt/disk8/appdata/radarr/logs/radarr.txt (24) Too many open files
Apr 17 13:36:50 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/sickrage/cache.db (24) Too many open files

These are some of the errors that come up.

ravanor-diagnostics-20170417-1346.zip
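For reference, the "(24)" in those lines is errno 24 (EMFILE): the shfs process itself, which serves the /mnt/user shares, has run out of file descriptors. A quick, hedged way to watch its usage against its limit (shfs can show up as more than one pid, hence the loop):

for pid in $(pidof shfs); do
    used=$(ls /proc/"$pid"/fd | wc -l)                               # fds currently open
    limit=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)   # soft limit
    echo "shfs pid $pid: $used of $limit fds in use"
done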
trurl Posted April 17, 2017

The diagnostics syslogs don't go back far enough to capture the start of the problem, but I suspect you have OOMed the user shares, similar to what we saw in another thread. Post the docker run command for the suspect apps.
kentromox (Author) Posted April 17, 2017

Jackett:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="jackett" --net="bridge" -e TZ="Europe/Berlin" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 9117:9117/tcp -v "/mnt/user/appdata/jackett":"/config":rw -v "/mnt/user/Torrent/blackhole":"/downloads":rw linuxserver/jackett
f0da71ab9b3aa2398abfcaaba166b616b733985329b75b3ee9167a58e3850573
The command finished successfully!

Headphones:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="Headphones" --net="bridge" -e TZ="Europe/Berlin" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 8181:8181/tcp -v "/mnt/user/appdata/headphones":"/config":rw -v "/mnt/user/Media/Ill Music/":"/music":rw -v "/mnt/user/Torrent/data/":"/data":rw linuxserver/headphones
02366f2f72e41757789a2335a9f83f6646ecbbf40574fc335734bb9f2ba93d70
The command finished successfully!
mudboy Posted April 21, 2017

Is anyone else having this problem while running the Ombi docker container?
kentromox (Author) Posted April 23, 2017

Okay, I found something interesting in the logs when the problem occurred; I don't know if this is the cause.

Apr 22 08:17:01 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_create: open: /mnt/disk7/appdata/ombi/Ombi.sqlite-journal (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_open: open: /mnt/cache/appdata/ombi/Ombi.sqlite (24) Too many open files
Apr 22 08:17:01 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/cache/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk1/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk2/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk3/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk4/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk5/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk6/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk7/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_readdir: opendir: /mnt/disk8/. (24) Too many open files
Apr 22 08:17:02 Ravanor shfs/user: err: shfs_flush: close: (9) Bad file descriptor

The problem starts at 08:17:01. Also, I can't quite understand the "OOMed the user shares" part; could you give me an explanation of how that problem comes up?
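A quick, hedged way to pin down when the errors begin and which app the paths implicate, using only the syslog (the "appdata/..." pattern matches the paths in the log above):

grep -m1 'Too many open files' /var/log/syslog    # first occurrence = onset time

# Tally which appdata folders the errors name, worst offenders first:
grep 'Too many open files' /var/log/syslog | grep -o 'appdata/[^/]*' | sort | uniq -c | sort -rn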
trurl Posted April 23, 2017

1 hour ago, kentromox said: "Also, can't quite understand the OOM the user share, could you give me an explanation to how the problem comes up?"

If Linux runs Out Of Memory (OOM), it will begin to kill processes. In v5 it wasn't uncommon for the webUI or SMB to get killed this way. It usually hasn't been much of a problem with v6, since it has 64-bit addressing. Maybe that's not what happened in your case; the diagnostics you posted don't go back far enough to see the start of the problem, so there isn't a log entry that actually says OOM.

Looks like you may be onto the real problem. I don't know whether the Open Files plugin would provide a clue or not.
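When the OOM killer does fire, the kernel logs it, so a quick check is to search the kernel ring buffer and the syslog; no hits means no OOM kill in the window the logs cover.

dmesg | grep -iE 'out of memory|oom-killer'    # kernel ring buffer
grep -i 'oom' /var/log/syslog                  # entries that may have scrolled out of dmesg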
kentromox (Author) Posted April 23, 2017

35 minutes ago, trurl said: "If Linux runs Out Of Memory (OOM), it will begin to kill processes. [...]"

Interesting. Where can I find the Open Files plugin? This is all new to me; I can't see it in the Plugins tab in the webUI.

Edit: It might also be related to Ombi; the problem has started at the same time for the last few days, around 8 in the morning, and I keep seeing Ombi unable to open files because of the "Too many open files" error.

Second edit: Just installed the Open Files plugin. Why on earth does Ombi need so many files open?? Uploading a pic of what I'm being presented with; it goes on for pages.

Third edit: Killed the Ombi docker, and everything seems to be normal in the Open Files plugin now. Going to let it run for a couple of days to see if the problem still exists. Thanks for the help, trurl. It's going to be interesting to see.
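For anyone without the plugin, a rough console equivalent of that view, built on lsof: tally open files by command name and show the worst offenders. The "mono" name in the second line is an assumption, based on Ombi running under Mono at the time (as the next post mentions).

lsof 2>/dev/null | awk '{print $1}' | sort | uniq -c | sort -rn | head

# Or inspect one suspect process directly:
lsof -c mono 2>/dev/null | wc -l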
avluis Posted April 24, 2017

This is a very interesting read, as I just migrated (about three hours ago) my Ombi container from a Synology to unRAID. And lo and behold, will you look at that: I get my first OOM with unRAID ever. I first noticed it while browsing GitLab, when my images stopped loading; then it was WinSCP, as my NFS shares started going bye-bye (this part is scary, lol).

I wasn't able to capture the moment it triggers, but Ombi is the only new element in my system. I'm speculating, since this is what it was doing after installation, that the episode searching is what's causing the OOM issue (related: https://github.com/tidusjar/Ombi/issues/1256).

It doesn't help that I already have a high number of open files (looking at you, SonarQube), so it didn't take much to tip it over the edge. This system stays fairly busy, so I won't be able to provide any more info than this; I just wanted to confirm that Ombi is the common element. I'm hoping those guys can move away from Mono and onto .NET Core; I can see a good app, just not under Linux.