OrangeLoon Posted November 3, 2021 (edited November 13, 2021: marked Solved)

I have been getting out-of-memory errors daily for several months. Once in a while they disappear for a day or so, then come back. I turned off all Dockers and have no VMs running. I usually have only two Dockers running, ZoneMinder and Plex; both have been off since yesterday. Any suggestions for next steps would be appreciated.

Unraid version: 6.9.2
prettyboy-diagnostics-20211103-0548.zip
trurl Posted November 3, 2021

Looks like ZoneMinder is getting OOM-killed.

1 hour ago, OrangeLoon said: "only two dockers running"

Then why do you have a 40G docker.img? 20G is often more than enough; I have 18 Dockers using only about half of 20G.
Frank1940 Posted November 3, 2021

You do have an issue with Plex:

Oct 26 02:07:36 PRETTYBOY kernel: Plex Media Scan[18630]: segfault at 10 ip 00001521ef5787ca sp 00001521eaa11898 error 4 in ld-musl-x86_64.so.1[1521ef568000+53000]
Oct 26 02:07:36 PRETTYBOY kernel: Code: 14 66 85 d2 74 02 f4 9b 8b 57 f8 81 fa ff ff 00 00 7f 02 f4 9b 83 e1 1f 89 d0 c1 e0 04 48 98 48 29 c7 48 8b 47 f0 48 83 c7 f0 <48> 39 78 10 74 02 f4 9b 8b 70 20 83 e6 1f 39 ce 73 02 f4 9b 8b 78

This has occurred many times in your syslog.

About the "out of memory" errors: sometimes these are caused by plugins or Docker applications that have been set up so that their data files are not written to a physical device. Remember that Unraid runs from a RAM disk, and data files can end up on that RAM disk, but that practice increases the risk that the system will run out of memory! (Plus, those files are lost every time the system reboots!) Usually this problem occurs because the user has misconfigured the path for a data file.
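(Not from the thread, but a generic way to check for this: ask df which filesystem backs a given path. On Unraid, an answer of rootfs or tmpfs means the data is sitting in RAM. The path below is only a stand-in; on a real server you would check a container's host paths, e.g. something like /mnt/user/appdata/zoneminder.)

```shell
# Check which filesystem backs a path; rootfs/tmpfs on Unraid means RAM.
# "/tmp" is a placeholder -- substitute the host path a container maps.
path="/tmp"
fs=$(df --output=fstype "$path" | tail -n 1 | tr -d ' ')
echo "$path is backed by: $fs"
case "$fs" in
  tmpfs|rootfs) echo "WARNING: $path lives in RAM" ;;
esac
```

If a path a Docker writes to reports tmpfs or rootfs, its data never reaches the array or cache, which matches the failure mode described above.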
OrangeLoon (author) Posted November 3, 2021

Thanks for the replies. Regarding the 40G docker.img: I originally had Home Assistant running as well when the memory errors started. One older posting suggested a larger docker.img. I enlarged it, got the same error, moved Home Assistant off, and didn't set the size back.

I can see where ZoneMinder is running out of memory, e.g.: "Out of memory: Killed process 7453 (zmc)", but that occurred on Oct 29th, and I stopped all Dockers on the morning of Nov 2nd and still had the memory error. When doing some problem analysis in the past, I stopped ZoneMinder for 24 hours and the memory error then occurred on Plex. So I stopped them both and still had the error. Hence the posting.

Plex's error: I haven't found a solution yet because the memory issue took priority; of course, they could be related. I'll look at the plugins. Most likely I didn't configure something. Thanks again. I'll be back with an update in a day or so.

GSN
Frank1940 Posted November 3, 2021

About the Plex error, I would suggest that you post in the support thread for that Docker.
OrangeLoon (author) Posted November 6, 2021

Removed all Dockers and still getting memory errors. Here is the log for the two days:

Nov 4 16:01:11 PRETTYBOY sSMTP[8950]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=612
Nov 4 16:04:46 PRETTYBOY sSMTP[10142]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=696
Nov 5 00:00:27 PRETTYBOY root: /var/lib/docker: 12.8 GiB (13726203904 bytes) trimmed on /dev/loop2
Nov 5 00:00:27 PRETTYBOY root: /mnt/cache: 38.3 GiB (41159491584 bytes) trimmed on /dev/sdb1
Nov 5 00:10:07 PRETTYBOY sSMTP[31193]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=733
Nov 5 00:10:09 PRETTYBOY sSMTP[31233]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=725
Nov 5 00:20:03 PRETTYBOY sSMTP[1310]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=1480
Nov 5 03:00:01 PRETTYBOY Plugin Auto Update: Checking for available plugin updates
Nov 5 03:00:04 PRETTYBOY Plugin Auto Update: community.applications.plg version 2021.11.04 does not meet age requirements to update
Nov 5 03:00:04 PRETTYBOY Plugin Auto Update: unassigned.devices.plg version 2021.11.04 does not meet age requirements to update
Nov 5 03:00:04 PRETTYBOY Plugin Auto Update: Checking for language updates
Nov 5 03:00:05 PRETTYBOY Plugin Auto Update: Community Applications Plugin Auto Update finished
Nov 5 03:40:16 PRETTYBOY crond[1694]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Nov 5 04:15:01 PRETTYBOY Docker Auto Update: Community Applications Docker Autoupdate running
Nov 5 04:15:01 PRETTYBOY Docker Auto Update: Checking for available updates
Nov 5 04:15:01 PRETTYBOY Docker Auto Update: No updates will be installed
Nov 5 04:40:01 PRETTYBOY root: Fix Common Problems Version 2021.08.05
Nov 5 04:40:02 PRETTYBOY root: Fix Common Problems: Warning: unRaids built in FTP server is running
Nov 5 04:40:07 PRETTYBOY root: Fix Common Problems: Error: Out Of Memory errors detected on your server
Nov 5 04:40:08 PRETTYBOY root: Fix Common Problems: Warning: Write Cache is disabled on disk6 ** Ignored
Nov 5 04:40:08 PRETTYBOY root: Fix Common Problems: Warning: Write Cache is disabled on disk7 ** Ignored
Nov 5 04:40:11 PRETTYBOY sSMTP[4247]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=888
Nov 5 21:50:53 PRETTYBOY webGUI: Successful login user root from 192.168.3.33
Nov 6 00:00:22 PRETTYBOY root: /var/lib/docker: 9.8 GiB (10512777216 bytes) trimmed on /dev/loop2
Nov 6 00:00:22 PRETTYBOY root: /mnt/cache: 52.5 GiB (56403001344 bytes) trimmed on /dev/sdb1
Nov 6 00:10:07 PRETTYBOY sSMTP[12375]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=733
Nov 6 00:20:03 PRETTYBOY sSMTP[14826]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=1480
Nov 6 03:00:01 PRETTYBOY Plugin Auto Update: Checking for available plugin updates
Nov 6 03:00:04 PRETTYBOY Plugin Auto Update: community.applications.plg version 2021.11.05 does not meet age requirements to update
Nov 6 03:00:04 PRETTYBOY Plugin Auto Update: unassigned.devices.plg version 2021.11.04 does not meet age requirements to update
Nov 6 03:00:04 PRETTYBOY Plugin Auto Update: Checking for language updates
Nov 6 03:00:04 PRETTYBOY Plugin Auto Update: Community Applications Plugin Auto Update finished
Nov 6 03:40:16 PRETTYBOY crond[1694]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
Nov 6 04:15:01 PRETTYBOY Docker Auto Update: Community Applications Docker Autoupdate running
Nov 6 04:15:01 PRETTYBOY Docker Auto Update: Checking for available updates
Nov 6 04:15:01 PRETTYBOY Docker Auto Update: No updates will be installed
Nov 6 04:40:01 PRETTYBOY root: Fix Common Problems Version 2021.08.05
Nov 6 04:40:02 PRETTYBOY root: Fix Common Problems: Warning: unRaids built in FTP server is running
Nov 6 04:40:07 PRETTYBOY root: Fix Common Problems: Error: Out Of Memory errors detected on your server
Nov 6 04:40:08 PRETTYBOY root: Fix Common Problems: Warning: Write Cache is disabled on disk6 ** Ignored
Nov 6 04:40:08 PRETTYBOY root: Fix Common Problems: Warning: Write Cache is disabled on disk7 ** Ignored
Nov 6 04:40:12 PRETTYBOY sSMTP[14984]: Sent mail for [email protected] (221 2.0.0 Bye) uid=0 username=root outbytes=888

Really, the only thing left running is NUT, and I use the system as backup storage. The backup process is this: Home Assistant, running on a separate computer, performs a backup at 1:00 AM and copies it to the Unraid system (about 107,000 KB) in a non-cached share, keeping only the latest 3 versions. That process takes about a minute. There is a Windows 10 system (192.168.3.33), which is always mapped to my Media share (non-cached), and every night at 2:00 AM it copies the newly written backup to a local drive; so a backup of a backup. Overkill, but it makes me happy. I believe that's it. I'm disabling the HA backup for today; if the memory error persists, I'll disable the Windows access next, and then go find more to disable.
OrangeLoon (author) Posted November 11, 2021

Update: I have removed all Dockers and disabled Mover, and I still have the memory error. No idea why.
Frank1940 Posted November 11, 2021

I had another look at your diagnostics file this morning. I found that you have eight shares that are not using any physical drives! They begin with the following letters: B, D, T, R, V, X, and Z. Included in the eight is the 'isos' share, which is not using a physical drive. Which share have you assigned Home Assistant to use?

Look at the folder called 'shares' in your diagnostics to see what I am talking about. There may be a *good* reason why you have shares without anything being stored on a physical disk, but it does look suspicious to the casual observer...
JonathanM Posted November 11, 2021

If it's not on a physical drive, it's in RAM, which probably explains the out of memory errors.
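(A generic way to see whether RAM-backed folders are the culprit, not a command from the thread: check how full the root filesystem is, since on Unraid it lives in RAM, and list its largest top-level folders.)

```shell
# Show usage of the root filesystem (RAM-backed on Unraid), then list the
# largest top-level folders on it. The -x flag keeps du from crossing into
# mounted disks, so only RAM-resident data is counted; errors for
# unreadable paths are discarded.
df -h /
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head
```

A folder that keeps growing in this listing between reboots would point straight at whatever is writing into RAM.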
OrangeLoon (author) Posted November 11, 2021

Interesting. I have 8 physical drives (parity + 7 data). I had a few shares listed in the Shares tab and removed the ones that I know are no longer used, leaving me with appdata, Media, Backups, and system. I reran diagnostics and it shows a total of 13. No freakin' idea why, how they got there, or how to get rid of them.
JonathanM Posted November 11, 2021

User shares are automatically created for every root folder on every array disk or pool.
Frank1940 Posted November 11, 2021 Share Posted November 11, 2021 (edited) Open the GUI Terminal and run the following commands: ls -al /mnt You should get something like this: Now run this command: ls -al /mnt/user You should now get something like this: You can either use the Snip-&-Sketch tool or left-click and sweep across the Terminal Windows which will copy the text to the clipboard. (If you do the second method use the toolbar in the reply box to output it as 'Code'.) Edited November 11, 2021 by Frank1940 Hit Save too soon Quote Link to comment
Frank1940 Posted November 11, 2021 Share Posted November 11, 2021 32 minutes ago, JonathanM said: User shares are automatically created for every root folder on every array disk or pool. I think a Linux process (or a linux user/person) can create a directory in either /mnt/ or /mnt/user which would create a share that exists only in RAM. (I am not going to try it on any of my servers to find out... 😉 ) Quote Link to comment
trurl Posted November 11, 2021 Share Posted November 11, 2021 1 hour ago, Frank1940 said: Eight Shares that are not using any physical drives! They begin with the following letters-- B,D,T,R,V,X, and Z Diagnostics includes .cfg files for shares that no longer exist, simply because those .cfg files are on flash. It is very common to see these .cfg files that correspond to no actual shares, and have no files, since they don't exist on any disk. Just because there are share .cfg files doesn't mean there are shares or any corresponding folder in /mnt/user. I have seen users with over 100 share .cfg files and it is very tedious to try to go through them all to see if they do correspond to actual shares. So I often ask for a screenshot of the User Shares page. And you will also see share .cfg files in diagnostics that correspond to actual shares, with files, but don't correspond to any share .cfg file on flash. These are shares that the user created by specifying a path in /mnt but haven't actually made any settings for in the webUI, so they have default settings, and that corresponding .cfg in diagnostics says so. Also, I think if there were a folder created at /mnt that didn't correspond to any mounted storage, it would be in rootfs, which is not all of RAM. Quote Link to comment
OrangeLoon Posted November 11, 2021 Author Share Posted November 11, 2021 root@PRETTYBOY:~# ls -al /mnt total 16 drwxr-xr-x 14 root root 280 Oct 21 21:44 ./ drwxr-xr-x 20 root root 460 Nov 11 08:58 ../ drwxrwxrwx 1 nobody users 44 Nov 11 08:55 cache/ drwxrwxrwx 5 nobody users 63 Nov 11 04:40 disk1/ drwxrwxrwx 4 nobody users 45 Nov 11 08:57 disk2/ drwxrwxrwx 3 nobody users 19 Nov 11 04:40 disk3/ drwxrwxrwx 3 nobody users 19 Nov 11 04:40 disk4/ drwxrwxrwx 4 nobody users 45 Nov 11 04:40 disk5/ drwxrwxrwx 3 nobody users 19 Nov 11 04:40 disk6/ drwxrwxrwx 3 nobody users 19 Nov 11 04:40 disk7/ drwxrwxrwt 2 nobody users 40 Oct 21 21:40 disks/ drwxrwxrwt 2 nobody users 40 Oct 21 21:40 remotes/ drwxrwxrwx 1 nobody users 63 Nov 11 08:57 user/ drwxrwxrwx 1 nobody users 63 Nov 11 08:57 user0/ root@PRETTYBOY:~# root@PRETTYBOY:~# ls -al /mnt/user total 0 drwxrwxrwx 1 nobody users 63 Nov 11 08:57 ./ drwxr-xr-x 14 root root 280 Oct 21 21:44 ../ drwxrwxrwx 1 nobody users 18 Jun 10 2020 .Trash-99/ drwxrwxrwx 1 nobody users 6 Aug 30 22:52 Backup/ drwxrwxrwx 1 nobody users 62 Sep 1 11:10 Media/ drwxrwxrwx 1 nobody users 156 Nov 4 06:45 appdata/ drwxrwxrwx 1 nobody users 26 May 11 2021 system/ Per a request from Frank1940 Quote Link to comment
Frank1940 Posted November 11, 2021 Share Posted November 11, 2021 These two shares are there because of the Unassigned Devices plugin. I don't use the plugin so I am not real familiar with it but I seem to recall that can be problems if the physical disk(s) for these shares are removed and then you attempt to write to these disks. (I hope someone else who has more experience with the plugin will jump in at this point...) Quote Link to comment
OrangeLoon (author) Posted November 13, 2021

I had used these when testing with an external drive. I removed the two Unassigned Devices plugins, and the memory issues have not recurred over the last two nights. I'm calling this one done. Thanks, everyone, for the help.