
Log filling up on 6.12.x; tried posted fixes, no luck



Since upgrading to 6.12.x my log has been filling up:

 

This is what it will do:

 

[screenshot: log100.webp]

 

This is where it is now after 21 hours uptime:

 

[screenshot]

 

I do not have any open browsers except the one tab in Microsoft Edge that is on a Windows 11 VM running on the unRAID server itself.  I do not even own any Android devices except Shield TVs, and I have made certain no other computer has an unRAID tab open.

 

My Dockers are using ipvlan and I have a Coffee Lake processor, not a Rocket Lake processor.

 

[screenshot]

 

If I run:

 

 truncate -s 0 /var/log/syslog

 

Some of the log space is reclaimed, but not all.

 

[screenshot]
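For reference, the actual usage of the log partition can be checked directly (as I understand it, /var/log on unRAID is a small RAM-backed tmpfs, the 128MB partition mentioned later in this thread):

df -h /var/log    # shows size, used, and available space for the log tmpfs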

 

If I look in "/var/log/" I can see logs growing, but truncating them does not free up any more log memory, and the log usage graph continues to increase.

 

root@unRAID:/var/log# ls -l
total 1692
-rw------- 1 root   root       0 Jun 29 11:59 btmp
-rw-r--r-- 1 root   root       0 Apr 28  2021 cron
-rw-r--r-- 1 root   root       0 Apr 28  2021 debug
-rw-r--r-- 1 root   root   92709 Jul  8 11:34 dmesg
-rw-r--r-- 1 root   root    3419 Jul  8 11:35 docker.log
-rw-r--r-- 1 root   root    7008 Jul  8 11:34 faillog
-rw-r--r-- 1 root   root       1 Jul  9 08:52 gitcount
-rw-r--r-- 1 root   root    3683 Jul  9 08:52 gitflash
-rw-r--r-- 1 root   root  104433 Jul  9 08:56 ipmifan
-rw-r--r-- 1 root   root   63948 Jul  8 11:34 lastlog
drwxr-xr-x 4 root   root     140 Jul  8 11:35 libvirt/
-rw-r--r-- 1 root   root    1191 Jul  9 00:20 maillog
-rw-r--r-- 1 root   root       0 Jul  8 11:34 mcelog
-rw-r--r-- 1 root   root       0 Apr 28  2021 messages
drwxr-xr-x 2 root   root      40 Aug 10  2022 nfsd/
drwxr-x--- 2 nobody root      60 Jul  8 11:34 nginx/
lrwxrwxrwx 1 root   root      24 Jun 29 11:58 packages -> ../lib/pkgtools/packages/
drwxr-xr-x 5 root   root     100 Jul  8 11:34 pkgtools/
drwxr-xr-x 2 root   root     460 Jul  8 11:34 plugins/
drwxr-xr-x 2 root   root      60 Jul  8 11:34 preclear/
drwxr-xr-x 2 root   root      40 Jul  8 11:37 pwfail/
lrwxrwxrwx 1 root   root      25 Jun 29 11:59 removed_packages -> pkgtools/removed_packages/
lrwxrwxrwx 1 root   root      24 Jun 29 11:59 removed_scripts -> pkgtools/removed_scripts/
lrwxrwxrwx 1 root   root      34 Jul  8 11:34 removed_uninstall_scripts -> pkgtools/removed_uninstall_scripts/
drwxr-xr-x 2 root   root      40 Jan 30 12:03 sa/
drwxr-xr-x 3 root   root     340 Jul  8 11:34 samba/
lrwxrwxrwx 1 root   root      23 Jun 29 11:58 scripts -> ../lib/pkgtools/scripts/
-rw-r--r-- 1 root   root       0 Apr 28  2021 secure
lrwxrwxrwx 1 root   root      21 Jun 29 11:58 setup -> ../lib/pkgtools/setup/
-rw-r--r-- 1 root   root       0 Apr 28  2021 spooler
drwxr-xr-x 3 root   root      60 Sep 27  2022 swtpm/
-rw-r--r-- 1 root   root       0 Jul  9 08:54 syslog
-rw-r--r-- 1 root   root 1488695 Jul  9 04:40 syslog.1
drwxr-xr-x 2 root   root      80 Jul  8 11:34 unraid-api/
-rw-r--r-- 1 root   root    5392 Jul  8 11:34 vfio-pci
-rw-r--r-- 1 root   root       0 Jul  8 11:34 vfio-pci-errors
-rw-rw-r-- 1 root   utmp    6912 Jul  8 11:34 wtmp
root@unRAID:/var/log# 

 

Deleting syslog.1 does nothing, and truncating ipmifan does nothing to restore free log space.
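One thing worth checking (a guess on my part, not something anyone has suggested here): if a process still holds a deleted or truncated log file open, the tmpfs space is not released until that process closes the file or restarts. Assuming lsof is available and /var/log is its own tmpfs mount, something like this should show any such files:

lsof +L1 /var/log    # list files on the /var/log filesystem that are deleted but still held open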

 

I am also having an issue with 6.12.x where every time I shut down it is not clean, and when my system starts back up a parity check begins.

 

I've exhausted what I know.  I will have to revert to 6.11.5 if a solution cannot be reached quickly, as I am going out of town for three weeks and my family will need to use the server for Plex and Kodi.

 

The only thing I changed was to delete my btrfs pool called "pool" and recreate it as a zfs pool called vssd.  My Dockers and VMs are stored on that pool.  I went into all the Dockers and fixed their parameters to reflect the name change.

 

Thank you for any and all help.

Attachments: unraid-diagnostics-20230709-0836.zip, unraid-syslog-20230709-1339.zip


To make this post and get all the screenshots I wanted, I had to shut down Docker.  When I tried to restart Docker, it would not start, so I had to reboot the whole server.  I did save syslog.1 before I rebooted in case anyone needs to see it.  I don't want to post it because I am not sure if it contains any sensitive information.  Please PM me if you require a copy of syslog.1.

 

Thanks again.

Edited by craigr: The server started a parity check upon reboot. Docker was not running when the server was shut down because it would not restart after I made the above post.

After reboot:

 

[screenshot]

 

Now the log will slowly start to grow.  If it helps, the log seems to grow exponentially if I go a long time without "looking" at the unRAID web GUI.  That said, I have tried closing the unRAID web GUI tab and the behavior persists.

6 minutes ago, itimpi said:

You might want to try running

du -sh /var/log/*

in case the logs of concern are in a directory.

 

Here is what we have so far:

 

root@unRAID:~# du -sh /var/log/*
0       /var/log/btmp
0       /var/log/cron
0       /var/log/debug
92K     /var/log/dmesg
4.0K    /var/log/docker.log
4.0K    /var/log/faillog
4.0K    /var/log/gitcount
4.0K    /var/log/gitflash
4.0K    /var/log/ipmifan
4.0K    /var/log/lastlog
12K     /var/log/libvirt
4.0K    /var/log/maillog
0       /var/log/mcelog
0       /var/log/messages
0       /var/log/nfsd
0       /var/log/nginx
0       /var/log/packages
20K     /var/log/pkgtools
0       /var/log/plugins
0       /var/log/preclear
0       /var/log/pwfail
0       /var/log/removed_packages
0       /var/log/removed_scripts
0       /var/log/removed_uninstall_scripts
0       /var/log/sa
28K     /var/log/samba
0       /var/log/scripts
0       /var/log/secure
0       /var/log/setup
0       /var/log/spooler
0       /var/log/swtpm
240K    /var/log/syslog
44K     /var/log/unraid-api
8.0K    /var/log/vfio-pci
0       /var/log/vfio-pci-errors
8.0K    /var/log/wtmp

 

I'll check again after the log balloons.

 

Thank you.
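One idea (untested, and not something suggested in the thread) to catch which file balloons overnight: snapshot the sizes periodically to the flash drive, so the snapshots don't live in /var/log themselves, then diff them in the morning. A rough sketch; the /boot/logwatch.txt path is just an example:

# append a timestamped usage snapshot every 10 minutes
while true; do
  { date; df -h /var/log; du -sh /var/log/*; echo; } >> /boot/logwatch.txt
  sleep 600
done &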


This is the error that repeats every second once it gets started:

 

Jul  9 04:40:04 unRAID nginx: 2023/07/09 04:40:04 [crit] 9749#9749: ngx_slab_alloc() failed: no memory
Jul  9 04:40:04 unRAID nginx: 2023/07/09 04:40:04 [error] 9749#9749: shpool alloc failed
Jul  9 04:40:04 unRAID nginx: 2023/07/09 04:40:04 [error] 9749#9749: nchan: Out of shared memory while allocating message of size 25404. Increase nchan_max_reserved_memory.
Jul  9 04:40:04 unRAID nginx: 2023/07/09 04:40:04 [error] 9749#9749: *636496 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  9 04:40:04 unRAID nginx: 2023/07/09 04:40:04 [error] 9749#9749: MEMSTORE:00: can't create shared message for channel /devices
Jul  9 04:40:05 unRAID nginx: 2023/07/09 04:40:05 [crit] 9749#9749: ngx_slab_alloc() failed: no memory
Jul  9 04:40:05 unRAID nginx: 2023/07/09 04:40:05 [error] 9749#9749: shpool alloc failed
Jul  9 04:40:05 unRAID nginx: 2023/07/09 04:40:05 [error] 9749#9749: nchan: Out of shared memory while allocating message of size 25405. Increase nchan_max_reserved_memory.
Jul  9 04:40:05 unRAID nginx: 2023/07/09 04:40:05 [error] 9749#9749: *636505 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  9 04:40:05 unRAID nginx: 2023/07/09 04:40:05 [error] 9749#9749: MEMSTORE:00: can't create shared message for channel /devices
Jul  9 04:40:06 unRAID nginx: 2023/07/09 04:40:06 [crit] 9749#9749: ngx_slab_alloc() failed: no memory
Jul  9 04:40:06 unRAID nginx: 2023/07/09 04:40:06 [error] 9749#9749: shpool alloc failed
Jul  9 04:40:06 unRAID nginx: 2023/07/09 04:40:06 [error] 9749#9749: nchan: Out of shared memory while allocating message of size 25405. Increase nchan_max_reserved_memory.
Jul  9 04:40:06 unRAID nginx: 2023/07/09 04:40:06 [error] 9749#9749: *636516 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  9 04:40:06 unRAID nginx: 2023/07/09 04:40:06 [error] 9749#9749: MEMSTORE:00: can't create shared message for channel /devices
Jul  9 04:40:07 unRAID nginx: 2023/07/09 04:40:07 [crit] 9749#9749: ngx_slab_alloc() failed: no memory
Jul  9 04:40:07 unRAID nginx: 2023/07/09 04:40:07 [error] 9749#9749: shpool alloc failed
Jul  9 04:40:07 unRAID nginx: 2023/07/09 04:40:07 [error] 9749#9749: nchan: Out of shared memory while allocating message of size 25400. Increase nchan_max_reserved_memory.
Jul  9 04:40:07 unRAID nginx: 2023/07/09 04:40:07 [error] 9749#9749: *636525 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  9 04:40:07 unRAID nginx: 2023/07/09 04:40:07 [error] 9749#9749: MEMSTORE:00: can't create shared message for channel /devices
Jul  9 04:40:08 unRAID nginx: 2023/07/09 04:40:08 [crit] 9749#9749: ngx_slab_alloc() failed: no memory
Jul  9 04:40:08 unRAID nginx: 2023/07/09 04:40:08 [error] 9749#9749: shpool alloc failed
Jul  9 04:40:08 unRAID nginx: 2023/07/09 04:40:08 [error] 9749#9749: nchan: Out of shared memory while allocating message of size 25400. Increase nchan_max_reserved_memory.
Jul  9 04:40:08 unRAID nginx: 2023/07/09 04:40:08 [error] 9749#9749: *636536 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jul  9 04:40:08 unRAID nginx: 2023/07/09 04:40:08 [error] 9749#9749: MEMSTORE:00: can't create shared message for channel /devices

 

Then it stops.  I don't know what makes it start or stop.
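For what it's worth, a rough way to see how much of the syslog those repeats account for (just an idea, not something anyone suggested here):

grep -c 'nchan: Out of shared memory' /var/log/syslog   # count the nchan errors
wc -l /var/log/syslog                                   # compare with the total line count
# other threads reportedly clear the nchan shared memory temporarily by restarting nginx:
# /etc/rc.d/rc.nginx restart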

 

EDIT: System info (in case you have signatures turned off): SuperMicro X11SCZ-F • Intel® Xeon® E-2288G • Micron 128GB ECC


There have been reports that this can happen if you have a device (e.g. an Android phone) left with a browser window open to the server.  It has also been mentioned in conjunction with the Edge browser with the option to sleep tabs activated.  I do not get this issue myself, so I am not sure if either of those is a legitimate cause, but it is worth checking out.

20 minutes ago, itimpi said:

There have been reports that this can happen if you have a device (e.g. an Android phone) left with a browser window open to the server.  It has also been mentioned in conjunction with the Edge browser with the option to sleep tabs activated.  I do not get this issue myself, so I am not sure if either of those is a legitimate cause, but it is worth checking out.

 

OK, I have been suspicious of sleep tabs with this issue because the log balloons when I have not accessed the tab in many hours (typically overnight).  I did try completely closing the tab, which I thought would have done the trick, but I'll try turning off sleep.  I don't know how to do that, but I have Google.

 

Regarding other devices, as I said in the first post, I have made sure no other devices have unRAID browser tabs open, and I do not even own any Android devices except Shield TVs, which only run Kodi and Plex; no web browsers.

 

I am running OPNsense; I wonder if there could be something there.

 

All that aside, this didn't start until 6.12.x.  Other than changing the name and filesystem of my Docker and VM cache pool to zfs, nothing else that I can think of has changed.  And the hard shutdowns and parity checks upon reboot are also suspicious and frustrating.  It seems like the server is shutting down cleanly: the VMs shut down, the apps stop, and everything seems fine; then a parity check runs on reboot.

2 hours ago, ljm42 said:

 

I'm pretty sure this is the problem with my shutdowns.  The other day when I manually stopped Docker it would not start again.  I had forgotten the commands to stop Docker, so I tried to reboot.  I think Docker is not shutting down cleanly every time I restart.
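For reference, these should be the console commands for the Docker service (the status form appears later in this thread; treat the stop/start arguments as my assumption):

/etc/rc.d/rc.docker status   # check whether the Docker service is running
/etc/rc.d/rc.docker stop     # stop the Docker service
/etc/rc.d/rc.docker start    # start it again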

[2023-07-09T21:49:32.124] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:49:37.133] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T21:49:42.125] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:49:47.131] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T21:49:52.128] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:00.002] [10228] [ERROR] [minigraph] NO PINGS RECEIVED IN 3 MINUTES, SOCKET MUST BE RECONNECTED
[2023-07-09T21:50:00.003] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'PING_FAILURE', error: 'Ping Receive Exceeded Timeout' }
[2023-07-09T21:50:00.004] [10228] [DEBUG] [remote-access] Clearing all active remote subscriptions, minigraph is no longer connected.
[2023-07-09T21:50:00.006] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-09T21:50:00.006] [10228] [INFO] [minigraph] Reconnecting Mothership - PING_FAILURE / PRE_INIT - SetGraphQLConnectionStatus Event
[2023-07-09T21:50:00.007] [10228] [INFO] [minigraph] Subscribing to Events
[2023-07-09T21:50:00.009] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'CONNECTING', error: null }
[2023-07-09T21:50:00.009] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-09T21:50:00.009] [10228] [INFO] [minigraph] Connecting to wss://mothership.unraid.net/ws
[2023-07-09T21:50:00.581] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'CONNECTED', error: null }
[2023-07-09T21:50:00.582] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-09T21:50:00.582] [10228] [INFO] [minigraph] Connected to wss://mothership.unraid.net/ws
[2023-07-09T21:50:02.129] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:07.133] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T21:50:12.134] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:22.136] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:27.143] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T21:50:32.138] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:32.160] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-09T21:50:37.156] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T21:50:42.137] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:52.138] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T21:50:57.144] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T21:51:02.140] [10228] [DEBUG] [emhttp] Loading state file for disks

 

[screenshot]

 

Is this the My Servers plugin, or whatever it's called now that it seems to be integrated into unRAID?  I've had loads of issues with it in the past.

4 hours ago, ljm42 said:

 

If I stop Docker, then stdout.log stops growing.  I confirmed Docker had stopped with `/etc/rc.d/rc.docker status`.  I tried stopping each Docker container individually, but the only way was to turn off Docker in unRAID entirely.

 

I am pretty lost at this point and have no idea where to go next.

 

Please help.


After restarting Docker:

 

[2023-07-09T23:44:00.001] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:44:43.395] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:44:48.401] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:44:53.398] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:44:58.409] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:45:00.003] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:45:13.401] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:45:18.407] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:45:53.405] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:45:58.417] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:46:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:46:23.408] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:46:28.420] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:47:00.001] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:47:03.414] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:47:08.424] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:47:33.417] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:47:38.429] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:48:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:48:43.423] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:48:48.428] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:49:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:50:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:50:13.435] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:50:18.446] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:50:53.438] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:50:58.448] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:51:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:52:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:52:28.350] [10228] [DEBUG] [emhttp] Loading state file for var
[2023-07-09T23:52:31.111] [10228] [DEBUG] [docker] Starting docker watch
[2023-07-09T23:52:31.111] [10228] [DEBUG] [docker] Creating docker event emitter instance
[2023-07-09T23:52:32.323] [10228] [DEBUG] [docker] Binding to docker events
[2023-07-09T23:52:32.832] [10228] [DEBUG] [emhttp] Loading state file for var
[2023-07-09T23:52:33.266] [10228] [DEBUG] [docker] [mariadb] container->start
[2023-07-09T23:52:33.450] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:52:33.814] [10228] [DEBUG] [docker] [plexinc/pms-docker:latest] container->start
[2023-07-09T23:52:34.444] [10228] [DEBUG] [docker] [binhex/arch-nzbget] container->start
[2023-07-09T23:52:35.021] [10228] [DEBUG] [docker] [binhex/arch-delugevpn] container->start
[2023-07-09T23:52:35.361] [10228] [DEBUG] [docker] [binhex/arch-delugevpn] container->die
[2023-07-09T23:52:43.450] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-09T23:52:43.452] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:52:48.455] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:52:53.450] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:52:53.454] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-09T23:52:58.453] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:53:00.003] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:53:03.453] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:53:03.463] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-09T23:53:08.462] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:53:13.454] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:53:23.454] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:53:28.463] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:53:33.456] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-09T23:53:33.462] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:53:38.471] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:53:43.456] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-09T23:53:43.458] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:53:48.460] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:53:53.459] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-09T23:53:58.473] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-09T23:54:00.003] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-09T23:54:03.458] [10228] [DEBUG] [emhttp] Loading state file for disks

 

And it repeats forever.  Why would this happen only intermittently?

 

11 hours ago, craigr said:

Is this the My Servers plugin, or whatever it's called now that it seems to be integrated into unRAID?  I've had loads of issues with it in the past.

 

The unraid-api/stdout.log grows to 10MB and then is rolled. In the next Connect plugin release we're going to reduce the size it can grow to, but even so there is no way that by itself it will fill the 128MB log partition. But why don't you go ahead and uninstall the Unraid Connect plugin for now, just to rule it out

 

TBH if there are any other plugins you can live without, it may help to simplify the system by removing as many as you can.

 

You run Plex right? Go into your Plex Docker container settings, switch to advanced view, and add this to the Extra Params

--no-healthcheck
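If you want to confirm the flag took effect after the container is recreated, something like this should show the healthcheck config Docker ends up with (the container name plex is just an example, use whatever yours is called):

docker inspect --format '{{json .Config.Healthcheck}}' plex   # should report the healthcheck as disabled (e.g. a "NONE" test) once --no-healthcheck is applied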

 

3 hours ago, ljm42 said:

 

The unraid-api/stdout.log grows to 10MB and then is rolled. In the next Connect plugin release we're going to reduce the size it can grow to, but even so there is no way that by itself it will fill the 128MB log partition. But why don't you go ahead and uninstall the Unraid Connect plugin for now, just to rule it out

 

TBH if there are any other plugins you can live without, it may help to simplify the system by removing as many as you can.

 

You run Plex right? Go into your Plex Docker container settings, switch to advanced view, and add this to the Extra Params

--no-healthcheck

 

 

 

Darn, I thought I found it.

 

Will do on Plex.

 

So far no ballooning log for 14 hours.

17 hours ago, ljm42 said:

 


 

 

Follow up.  So far the log has not ballooned since removing the Unraid Connect plugin and adding --no-healthcheck to the Plex docker... OK, why?  I'm not prepared to say it's fixed.  Removing the Unraid Connect plugin was obvious.  However, you have me scratching my head on where you came up with removing healthcheck from Plex, and that you suggested only Plex.  Why just Plex, why not Deluge, why not all my Dockers?

 

So far it seems brilliant, but should I remove healthcheck from all my Dockers, or just the Dockers that are always running and doing things?

 

I suppose it could have been the Unraid Connect plugin too.  I won't have time to fully investigate until I get back from my travels in a few weeks.

 

Thanks!

On 7/10/2023 at 10:47 AM, ljm42 said:

 


 

 

Also, I didn't know healthcheck was enabled by default in unRAID Docker containers.  I thought one had to go through a whole rigamarole to get it turned on.  Now I know it's on by default.  Turning it off for Plex has reduced writes to my SSD pool tremendously, which was something that always bugged me even though it only amounts to about 75GB a year; the constant writing was just annoying.  At one point I even moved the data path to a spinner so that it wouldn't write to SSD all the time.  When Plex became a full-time necessary Docker a few months ago, I didn't want to risk it being on a single spinner, so I moved it back to the RAID1 SSD pool.

 

Thanks again for that suggestion, but I do wonder if I should turn off healthcheck for things like Deluge and Radarr as well.  As I understand it, healthcheck is really just a Band-Aid that covers underlying problems in most cases.  If the server is running correctly, then it should not be needed.
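Since I'm not sure which of my containers even ship a healthcheck, I figure something like this would list it (my own guess at the check, based on what docker inspect reports):

# list running containers and whether their config defines a healthcheck
for c in $(docker ps --format '{{.Names}}'); do
  echo "$c: $(docker inspect --format '{{if .Config.Healthcheck}}has healthcheck{{else}}no healthcheck{{end}}' "$c")"
done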

On 7/10/2023 at 10:47 AM, ljm42 said:

 


 

 

More, I am seeing this in stdout.log.  It seems to have started after disabling healthcheck on Plex.  What does it mean, and do I need to worry about it?  Also, since you said that log is limited to 10MB, I am not going to concern myself with it as a culprit anymore; that's a relief.

 

[2023-07-10T04:58:00.002] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
[2023-07-10T04:58:01.333] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-10T04:58:06.328] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-10T04:58:06.331] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-10T04:58:08.490] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'CONNECTING', error: null }
[2023-07-10T04:58:08.490] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-10T04:58:08.490] [10228] [INFO] [minigraph] Connecting to wss://mothership.unraid.net/ws
[2023-07-10T04:58:08.855] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'CONNECTED', error: null }
[2023-07-10T04:58:08.856] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-10T04:58:08.856] [10228] [INFO] [minigraph] Connected to wss://mothership.unraid.net/ws
[2023-07-10T04:58:08.980] [10228] [ERROR] [minigraph] Network Error Encountered "error" message expects the 'payload' property to b>
[2023-07-10T04:58:08.980] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  {
  status: 'ERROR_RETRYING',
  error: `"error" message expects the 'payload' property to be an array of GraphQL errors, but got "Could not find a user with that information. Please try>
}
[2023-07-10T04:58:08.980] [10228] [DEBUG] [remote-access] Clearing all active remote subscriptions, minigraph is no longer connecte>
[2023-07-10T04:58:08.980] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-10T04:58:08.981] [10228] [INFO] [minigraph] Delay currently is 29237.169127408637
[2023-07-10T04:58:11.335] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-10T04:58:16.328] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-10T04:58:21.331] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-10T04:58:26.328] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-10T04:58:31.331] [10228] [DEBUG] [app] Array was updated, publishing event
[2023-07-10T04:58:36.328] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-10T04:58:38.219] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'CONNECTING', error: null }
[2023-07-10T04:58:38.219] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-10T04:58:38.219] [10228] [INFO] [minigraph] Connecting to wss://mothership.unraid.net/ws
[2023-07-10T04:58:38.576] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  { status: 'CONNECTED', error: null }
[2023-07-10T04:58:38.576] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-10T04:58:38.576] [10228] [INFO] [minigraph] Connected to wss://mothership.unraid.net/ws
[2023-07-10T04:58:38.666] [10228] [ERROR] [minigraph] Network Error Encountered "error" message expects the 'payload' property to b>
[2023-07-10T04:58:38.666] [10228] [DEBUG] [minigraph] GraphQL Connection Status:  {
  status: 'ERROR_RETRYING',
  error: `"error" message expects the 'payload' property to be an array of GraphQL errors, but got "Could not find a user with that information. Please try>
}
[2023-07-10T04:58:38.666] [10228] [DEBUG] [remote-access] Clearing all active remote subscriptions, minigraph is no longer connecte>
[2023-07-10T04:58:38.666] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
[2023-07-10T04:58:38.667] [10228] [INFO] [minigraph] Delay currently is 253101.86594354652
[2023-07-10T04:58:46.328] [10228] [DEBUG] [emhttp] Loading state file for shares
[2023-07-10T04:58:46.330] [10228] [DEBUG] [emhttp] Loading state file for disks
[2023-07-10T04:58:51.334] [10228] [DEBUG] [app] Array was updated, publishing event

 

1 hour ago, craigr said:

I didn't know healthcheck was enabled by default in unRAID Docker containers. 

 

It is actually up to the container; it depends on whether it has code to run healthchecks.

 

I don't know if the other containers you mentioned have code to run healthchecks; I don't think I have seen anyone suggest disabling it on them. But it shouldn't hurt to disable it.

 

I don't yet know if everyone should disable healthchecks on Plex, but if they are having issues with containers not being able to restart I will suggest it.

For anyone else reading this, if you have the NVIDIA plugin installed be sure it is up to date. Plex healthchecks with older versions of the NVIDIA plugin will definitely cause issues.

 

52 minutes ago, craigr said:

I am seeing this in stdout.log.

 

Unraid-api will log activity to help with troubleshooting; the logging shouldn't be a problem (just to repeat for anyone else who stumbles on this: the unraid-api currently logs up to 10MB before rolling the log, and in the future that limit will be reduced. Alone it is not enough to fill the 128MB log partition on Unraid).

 

Based on that log snippet I'd suggest signing out of Unraid Connect and then signing back in. If you have further questions about the actual contents of the log, please go to Settings -> Management Access -> Unraid Connect and see the section about Unraid-api logs. They could have confidential information so I'd suggest not posting them publicly.

54 minutes ago, ljm42 said:

 


 

I logged out of and then uninstalled Unraid Connect.  I cannot sign back in.  This can't be affecting the log, can it?  Should I reinstall the plugin?  I frankly don't like it much, have had many issues with it, and don't really trust the unencrypted backups.  It's certainly not an app I feel I need at all.

4 minutes ago, craigr said:

I logged out of and then uninstalled Unraid Connect.  I cannot sign back in.  This can't be affecting the log, can it?  Should I reinstall the plugin?

 

Sorry, I'm confused. If you uninstall the Unraid Connect plugin then the sign in box will be removed from the upper right corner of the page and you won't be able to sign in to Unraid Connect from within the webgui. Uninstalling the Unraid Connect plugin will have no effect on logging in to the webgui as root.

 

If that doesn't answer your question, please show a screenshot to help me understand the issue.

4 minutes ago, ljm42 said:

 


 

[screenshot]

 

Yes, I cannot sign in because the app is gone and uninstalled.  I meant should I reinstall the app and sign in, then sign out again?

 

I thought you were implying that my log entries had to do with the Unraid Connect app, which is now totally uninstalled?  I am now also confused 🤪

