jeffreywhunter

Unknown App Spamming Log
Recently, I've installed a few new Apps/Plugins (MySQL, Pydio, PlexPy, ProFTPd and Apache) and I'm seeing a repeating entry in my log.  

 

How do I determine which app/plugin is spamming the log?  The entry looks like this...

 

Feb 23 07:34:01 HunterNAS crond[9448]: chdir failed: root Console and webGui login account
Feb 23 07:34:01 HunterNAS crond[9449]: chdir failed: root Console and webGui login account
Feb 23 07:34:01 HunterNAS crond[9450]: chdir failed: root Console and webGui login account
Feb 23 07:35:01 HunterNAS crond[9896]: chdir failed: root Console and webGui login account
Feb 23 07:35:01 HunterNAS crond[9898]: chdir failed: root Console and webGui login account
Feb 23 07:35:01 HunterNAS crond[9897]: chdir failed: root Console and webGui login account
Feb 23 07:36:01 HunterNAS crond[10340]: chdir failed: root Console and webGui login account
Feb 23 07:36:01 HunterNAS crond[10342]: chdir failed: root Console and webGui login account
Feb 23 07:36:01 HunterNAS crond[10341]: chdir failed: root Console and webGui login account
Feb 23 07:37:01 HunterNAS crond[10796]: chdir failed: root Console and webGui login account
Feb 23 07:37:01 HunterNAS crond[10795]: chdir failed: root Console and webGui login account
Feb 23 07:37:01 HunterNAS crond[10794]: chdir failed: root Console and webGui login account
Feb 23 07:38:01 HunterNAS crond[11235]: chdir failed: root Console and webGui login account
Feb 23 07:38:01 HunterNAS crond[11236]: chdir failed: root Console and webGui login account
Feb 23 07:38:01 HunterNAS crond[11237]: chdir failed: root Console and webGui login account
Feb 23 07:39:01 HunterNAS crond[11677]: chdir failed: root Console and webGui login account
Feb 23 07:39:01 HunterNAS crond[11678]: chdir failed: root Console and webGui login account
Feb 23 07:39:01 HunterNAS crond[11679]: chdir failed: root Console and webGui login account
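One generic way to see which message dominates a syslog is to strip the timestamp and PID so identical messages group together, then count. A sketch (not unRAID-specific; a small sample file stands in for /var/log/syslog so it can be tried anywhere):

```shell
# Count repeated syslog messages, ignoring timestamp and PID.
# Sample data stands in for /var/log/syslog.
cat > /tmp/sample.log <<'EOF'
Feb 23 07:34:01 HunterNAS crond[9448]: chdir failed: root Console and webGui login account
Feb 23 07:34:01 HunterNAS crond[9449]: chdir failed: root Console and webGui login account
Feb 23 07:35:01 HunterNAS emhttp: nothing to sync
EOF
cut -d' ' -f5- /tmp/sample.log |   # drop "Mon DD HH:MM:SS HOST" (note: single-digit days shift fields)
  sed -E 's/\[[0-9]+\]//' |        # drop the [PID]
  sort | uniq -c | sort -rn        # most frequent message first
```

The top line of the output names the message (and the daemon, here crond) responsible for most of the volume.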

 

dmacias identified the problem while helping me get Apache working.  He said he's been told it happens in the middle of installing his command line plugin (which I installed a long time ago), but there's no cron involved with that plugin.

 

So is there a way to identify the source of these log entries?  Syslog attached.

 

Thanks in advance!

hunternas-diagnostics-20170223-0744.zip


Or boot in SAFE mode to see if they stop. If not, then removing plugins isn't going to help.


Ok, so I booted in safe mode, then looked at the log.  The same issue is in the log, so it's not plugins...?

 

Feb 28 12:55:01 HunterNAS crond[13431]: chdir failed: root Console and webGui login account
Feb 28 12:55:01 HunterNAS crond[13432]: chdir failed: root Console and webGui login account
Feb 28 12:55:01 HunterNAS crond[13433]: chdir failed: root Console and webGui login account

Dockers are still running.  Looking at the server log again (tail), I see this repeating every 4 or 5 seconds:

 

Feb 28 16:18:01 HunterNAS crond[6958]: chdir failed: root Console and webGui login account
Feb 28 16:18:01 HunterNAS crond[6960]: chdir failed: root Console and webGui login account
Feb 28 16:18:01 HunterNAS crond[6959]: chdir failed: root Console and webGui login account
Feb 28 16:18:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
Feb 28 16:18:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null

It appears I have a problem with Dynamix?  Dynamix and unRaid are the only plugins running...

 

After a day or so, my server crashes, I assume because the log fills...

 

Syslog attached...

 

Thoughts?

hunternas-diagnostics-20170228-1605.zip


So I thought I'd take down the docker engine as well...  no change.  So now: no dockers running, no plugins except dynamix and unraid.  Log entries continue.  All help appreciated...

 

Feb 28 16:21:01 HunterNAS crond[8331]: chdir failed: root Console and webGui login account
Feb 28 16:21:01 HunterNAS crond[8332]: chdir failed: root Console and webGui login account
Feb 28 16:21:01 HunterNAS crond[8333]: chdir failed: root Console and webGui login account
Feb 28 16:21:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
Feb 28 16:21:07 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
Feb 28 16:21:33 HunterNAS emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin checkall
Feb 28 16:21:54 HunterNAS emhttp: shcmd (441): /etc/rc.d/rc.docker stop |& logger
Feb 28 16:21:55 HunterNAS root: stopping docker ...
Feb 28 16:21:55 HunterNAS root: 5b8cdcc507b3
Feb 28 16:21:56 HunterNAS root: waiting for docker to die...
Feb 28 16:21:57 HunterNAS avahi-daemon[2051]: Interface docker0.IPv4 no longer relevant for mDNS.
Feb 28 16:21:57 HunterNAS avahi-daemon[2051]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Feb 28 16:21:57 HunterNAS avahi-daemon[2051]: Withdrawing address record for 172.17.0.1 on docker0.
Feb 28 16:21:57 HunterNAS emhttp: shcmd (442): umount /var/lib/docker |& logger
Feb 28 16:22:01 HunterNAS crond[9290]: chdir failed: root Console and webGui login account
Feb 28 16:22:01 HunterNAS crond[9291]: chdir failed: root Console and webGui login account
Feb 28 16:22:01 HunterNAS crond[9292]: chdir failed: root Console and webGui login account
Feb 28 16:22:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
Feb 28 16:22:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
Feb 28 16:23:01 HunterNAS crond[9763]: chdir failed: root Console and webGui login account
Feb 28 16:23:01 HunterNAS crond[9764]: chdir failed: root Console and webGui login account
Feb 28 16:23:01 HunterNAS crond[9765]: chdir failed: root Console and webGui login account
Feb 28 16:23:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
Feb 28 16:23:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
Feb 28 16:24:01 HunterNAS crond[10222]: chdir failed: root Console and webGui login account
Feb 28 16:24:01 HunterNAS crond[10223]: chdir failed: root Console and webGui login account
Feb 28 16:24:01 HunterNAS crond[10221]: chdir failed: root Console and webGui login account
Feb 28 16:24:01 HunterNAS crond[1660]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null

 

4 minutes ago, jeffreywhunter said:

Ok, so I booted in safe mode, then looked at the log.  The same issue is in the log, so its not Plugins...?

 

4 minutes ago, jeffreywhunter said:

Appears I have a problem with Dynamix?  Dynamix and unRaid are the only plugins running...

According to your log snippet, dynamix system stats and local master are both running.  According to your diagnostics, the system is NOT running in safe mode, since plugins were installed.

 

Your go file also has this in it

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c

which is a remnant of unMenu, so anything within the packages folder on the flash drive with the extension .auto_install is going to install regardless of safe mode.
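What that remnant would pick up can be checked directly. A sketch (a scratch directory stands in for /boot/packages so it can be run anywhere; the file names are hypothetical):

```shell
# List what the leftover unMenu line would auto-install at boot, safe mode or not.
pkgdir=$(mktemp -d)    # stands in for /boot/packages
touch "$pkgdir/unmenu-pkg.auto_install" "$pkgdir/readme.txt"   # hypothetical names
find "$pkgdir" -name '*.auto_install' -type f -print | sort
# On the real flash drive:
#   find /boot/packages -name '*.auto_install' -type f -print | sort
# Deleting those files (or the line in /boot/config/go) stops the auto-install.
```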

14 hours ago, Squid said:

According to your log snippet, dynamix system stats and local master are both running.  According to your diagnostics, the system is NOT running in safe mode, since plugins were installed.

 

 

I wondered that.  BUT, according to the WEBGUI, it IS in safe mode...?  Not sure if that is diagnostic or not, but I did reboot in safe mode AND disabled Docker to see if either of them was the problem.

 

20170301-ehyw-216kb.jpg

 

That said, I removed the Go file and rebooted.  That appears to have fixed the "exit status 127" errors.  But I continue to get the "chdir failed" errors every 60 seconds.  Latest log attached.

 

I'm not sure if this is related or not, but I'm also seeing a problem with cache_dirs.  I had deleted some shares, which cache_dirs is still trying to go after.  I've posted a question about this on that forum topic - cache_dirs post here.

 

 

hunternas-diagnostics-20170222-1420.zip


UPDATE:  RobJ's response said to change the cache_dirs config and save.  That fixed the cache_dirs problem.  But I still have this spam filling my log...  Is there some other diagnostic I can do to isolate it?  I have:

 

1. Disabled plugins and dockers and booted in safe mode; the spam log entries continue.

2. Corrected cache_dirs and the leftover unMenu entry in the go file, which solved those problems.

 

But I continue to see the chdir failed entries.

I have recently installed several dockers (MySQL, Pydio, PlexPy) and some plugins (Apache and ProFTPd).  But disabling them didn't stop the crond errors.

 

I did a crontab -l; here's the output (just the default):

root@HunterNAS:/# crontab -l
====> (Note I removed comments)
# Run hourly cron jobs at 47 minutes after the hour:
47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
#
# Run daily cron jobs at 4:40 every day:
40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
#
# Run weekly cron jobs at 4:30 on the first day of the week:
30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
#
# Run monthly cron jobs at 4:20 on the first day of the month:
20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null
0 3 * * * /usr/local/emhttp/plugins/ca.backup/scripts/backup.php &>/dev/null 2>&1
root@HunterNAS:/#

Did a cat of cron.d/root

root@HunterNAS:/etc# cd cron.d
root@HunterNAS:/etc/cron.d# ls
root
root@HunterNAS:/etc/cron.d# cat root
# Generated cron settings for plugin autoupdates
0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1
# Generated local master browser check:
*/1 * * * * /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
# Generated system data collection schedule:
*/1 * * * * /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
# Generated docker monitoring schedule:
10 0 * * 1 /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
# Generated system monitoring schedule:
*/1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
# Generated mover schedule:
0 */3 * * * /usr/local/sbin/mover |& logger
# Generated parity check schedule:
0 23 * * 0 /usr/local/sbin/mdcmd check  &> /dev/null
# Generated plugins version check schedule:
10 0 * * 1 /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
# Generated array status check schedule:
20 0 * * * /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null
root@HunterNAS:/etc/cron.d#

Pretty vanilla...??!?
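The per-minute jobs in that listing are the only ones that can produce once-a-minute spam, and they can be picked out mechanically. A sketch (a sample file stands in for /etc/cron.d/root so it can be tried anywhere):

```shell
# Show only the cron entries that fire every minute.
cat > /tmp/root.cron <<'EOF'
0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1
*/1 * * * * /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
*/1 * * * * /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
10 0 * * 1 /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
EOF
grep '^\*/1 ' /tmp/root.cron
# On the server: grep '^\*/1 ' /etc/cron.d/root
```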

You can go to the Tools menu, then to Processes, and search for cache.dirs.  This will show you the command run for cache.dirs.  It should list the -i (include) and -e (exclude) directories.

I would check out your dynamix plugin directory in the config/plugins folder on your flash drive. Look at the *.cron files and see if you can find the culprit for the log spam. Also you could run
cat /etc/cron.d/root
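One way to follow that suggestion mechanically is to dump every plugin cron fragment with its filename. A sketch (a scratch directory stands in for the flash drive's config/plugins folder; the plugin name and fragment are illustrative):

```shell
# Print every plugin cron fragment with its path, to spot the spam source.
plugdir=$(mktemp -d)    # stands in for /boot/config/plugins
mkdir -p "$plugdir/dynamix.system.stats"
echo '*/1 * * * * /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null' \
  > "$plugdir/dynamix.system.stats/system.stats.cron"
for f in "$plugdir"/*/*.cron; do
  echo "== $f =="
  cat "$f"
done
# On the server:
#   for f in /boot/config/plugins/*/*.cron; do echo "== $f =="; cat "$f"; done
```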

3 hours ago, jeffreywhunter said:

I wondered that.  BUT, according to the WEBGUI, it IS in safe mode...?  Not sure if that is diagnostic or not, but I did reboot in safe mode AND disabled Docker to see if any of them were the problem.

Regardless of what the message at the bottom of the screen says, plugins have been installed.  The diagnostics and the cron entries both agree with that.  I'll have to check this out tonight and see if there's a defect here.

2 hours ago, dmacias said:

I would check out your dynamix plugin directory in the config/plugins folder on your flash drive. Look at the *.cron files and see if you can find the culprit for the log spam. Also you could run


cat /etc/cron.d/root
 

I did run the cat /etc/cron.d/root - displayed in my previous post.

 

I found the problem with cache_dirs.  Simple - I just needed to change something in the settings and apply.  Works perfectly now.

 

Regarding the subject of this topic (repeating/spam log entries), I've not been able to figure that out yet.  I looked through all the .cron files in the flash plugins directory.  I've attached an image of their contents.  Didn't see anything obvious...to a newbie...  But maybe you were referring to the cache_dirs issue, rather than the repeating chdir failed errors in my syslog.

 

all cron files.jpg


Try checking the cron log.
cat /var/log/cron

The entries running every minute other than the system monitor are the master browser and system stats plugins.

I filed a defect report: in safe mode, update_cron still runs and is pretty much guaranteed to spam the logs.

1 hour ago, dmacias said:


Try checking the cron log.
cat /var/log/cron

The entries running every minute other than the system monitor are the master browser and system stats plugins.

 

Running cat /var/log/cron returned nothing.

 

root@HunterNAS:/# cd /var/log
root@HunterNAS:/var/log# ls
apcupsd.events  debug       faillog  libvirt/  nfsd/      preclear.disk.log  samba/    setup/   tor/
btmp            dmesg       httpd/   maillog   packages/  removed_packages/  scripts/  spooler  wtmp
cron            docker.log  lastlog  messages  plugins/   removed_scripts/   secure    syslog
root@HunterNAS:/var/log# cat cron
root@HunterNAS:/var/log#

 

1 hour ago, Squid said:


I filed a defect report where in safe mode update_cron still runs and is pretty much guaranteed to spam the logs


 

So this means that the system itself is spamming the log?


No.  It just means that in safe mode, the plugin system is still adding the cron entries for the various plugins, but since the plugins themselves aren't installed, the syslog is going to show errors every time that particular cron entry runs.  That explains why your /etc/cron.d/root file had the entries in safe mode, and why in safe mode you were still seeing the syslog spam from system stats (which runs every minute).
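The "exit status 127" part of the log fits this: 127 is the shell's code for "command not found", which is exactly what crond gets when the cron entry survives but the plugin script behind it doesn't exist. A quick sketch:

```shell
# Run a script path that doesn't exist, the way crond would.
# (Path is illustrative; any missing command behaves the same.)
sh -c '/usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1' 2>/dev/null
echo "exit status: $?"    # prints 127 when the command can't be found
```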

1 hour ago, Squid said:

No.  It just means that in safe mode, the plugin system is still adding in the cron entries for the various plugins, but since the plugins themselves aren't installed, the syslog is going to indicate errors everytime that particular cron entry runs.  Explains why your /etc/cron.d/root file had the entries in safe mode, and why in safe mode you were still seeing the syslog spam from system stats (which runs every minute)

 

That makes sense as an explanation for seeing those entries in safe mode.  But does it also explain why I'm seeing those same entries while not in safe mode?  Is there something I can turn off now to stop the log entries?  (I'm having to reboot the server almost every day because the log fills...)

 

By the way, if there is anything anyone wants to look at or test on my system, I'm happy to be a guinea pig...  I can provide remote access through TeamViewer if that would help...

 

Latest syslog attached (full of spam entries)...

hunternas-diagnostics-20170302-0949.zip

19 minutes ago, jeffreywhunter said:

Is there something I can turn off now to stop the log entries (I'm having to reboot the server almost every day because of the log filling...)...

Until it's figured out, the solution is to uninstall the plugin in question (system stats) and reboot.

1 hour ago, Squid said:

Until its figured out, the solution is to uninstall the plugin in question (system stats) and reboot.

 

Uninstalled Dynamix System Stats...  still seeing the crond spam...  Syslog attached...

 

Obviously it's a different plugin?  If you look at the last couple of emhttp log entries you'll see emhttp: nothing to sync, then unbalance loading, then the crond errors start.  Is the proximity of these entries to the start of the crond errors diagnostic?  Could this problem be connected to the webGui and some font issue?

emhttp: err: sendFile: sendfile /usr/local/emhttp/webGui/styles/font-awesome.css: Broken pipe

 

Mar 2 11:30:48 HunterNAS emhttp: Starting services...
Mar 2 11:30:48 HunterNAS emhttp: shcmd (100): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/cache/docker.img' /var/lib/docker 20 |& logger
Mar 2 11:30:48 HunterNAS kernel: BTRFS: device fsid d50f0a96-f5f2-40d3-b029-fa978239ea10 devid 1 transid 15867 /dev/loop0
Mar 2 11:30:48 HunterNAS kernel: BTRFS info (device loop0): disk space caching is enabled
Mar 2 11:30:48 HunterNAS kernel: BTRFS info (device loop0): has skinny extents
Mar 2 11:30:48 HunterNAS root: Resize '/var/lib/docker' of 'max'
Mar 2 11:30:48 HunterNAS kernel: BTRFS info (device loop0): new size for /dev/loop0 is 21474836480
Mar 2 11:30:48 HunterNAS emhttp: shcmd (102): /etc/rc.d/rc.docker start |& logger
Mar 2 11:30:48 HunterNAS root: starting docker ...
Mar 2 11:30:50 HunterNAS kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Mar 2 11:30:50 HunterNAS avahi-daemon[12666]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Mar 2 11:30:50 HunterNAS avahi-daemon[12666]: New relevant interface docker0.IPv4 for mDNS.
Mar 2 11:30:50 HunterNAS avahi-daemon[12666]: Registering new address record for 172.17.0.1 on docker0.IPv4.
Mar 2 11:30:50 HunterNAS root: HunterNASPlexServer: started succesfully!
Mar 2 11:30:50 HunterNAS emhttp: shcmd (104): /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php |& logger
Mar 2 11:30:53 HunterNAS root: Updating templates... Updating info... Done.
Mar 2 11:30:53 HunterNAS emhttp: nothing to sync
Mar 2 11:30:53 HunterNAS sudo: root : TTY=unknown ; PWD=/ ; USER=nobody ; COMMAND=/bin/bash -c /usr/local/emhttp/plugins/unbalance/unbalance -port 6237
Mar 2 11:31:01 HunterNAS crond[16061]: chdir failed: root Console and webGui login account
Mar 2 11:31:01 HunterNAS crond[16060]: chdir failed: root Console and webGui login account
Mar 2 11:31:25 HunterNAS emhttp: err: sendFile: sendfile /usr/local/emhttp/webGui/styles/font-awesome.css: Broken pipe
Mar 2 11:32:01 HunterNAS crond[17716]: chdir failed: root Console and webGui login account
Mar 2 11:32:01 HunterNAS crond[17715]: chdir failed: root Console and webGui login account
Mar 2 11:33:01 HunterNAS crond[19301]: chdir failed: root Console and webGui login account
Mar 2 11:33:01 HunterNAS crond[19302]: chdir failed: root Console and webGui login account
Mar 2 11:34:01 HunterNAS crond[20707]: chdir failed: root Console and webGui login account
Mar 2 11:34:01 HunterNAS crond[20708]: chdir failed: root Console and webGui login account
Mar 2 11:35:01 HunterNAS crond[21526]: chdir failed: root Console and webGui login account
Mar 2 11:35:01 HunterNAS crond[21527]: chdir failed: root Console and webGui login account
Mar 2 11:36:01 HunterNAS crond[22682]: chdir failed: root Console and webGui login account
Mar 2 11:36:01 HunterNAS crond[22681]: chdir failed: root Console and webGui login account
Mar 2 11:37:01 HunterNAS crond[23141]: chdir failed: root Console and webGui login account
Mar 2 11:37:01 HunterNAS crond[23140]: chdir failed: root Console and webGui login account

 

hunternas-diagnostics-20170302-1134.zip


Quick update, this just appeared in my log...

 

Mar 2 11:43:01 HunterNAS crond[27103]: chdir failed: root Console and webGui login account
Mar 2 11:43:03 HunterNAS inotifywait[13431]: Failed to watch /mnt/disk5; upper limit on inotify watches reached!
Mar 2 11:43:03 HunterNAS inotifywait[13431]: Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify/max_user_watches'.

 

I googled a bit and every solution I found is to increase the limit with sudo sysctl fs.inotify.max_user_watches=<some random high number> - but unRAID does not have sudo...

And I was unable to find any information about the consequences of raising that value.  I guess the default kernel value was set for a reason, but it seems to be inadequate for particular usages.  So here are my questions:

1. Is it safe to raise that value and what would be the consequences of a too high value?  How would I do that in unraid?
2. Is there a way to find out which watches are currently set and which process set them, to determine whether the limit being reached is caused by faulty software?

Is this related to the crond issue (grasping at straws), or is this an entirely different issue that deserves a separate topic?

28 minutes ago, jeffreywhunter said:

1. Is it safe to raise that value and what would be the consequences of a too high value?  How would I do that in unraid?

To see the current number of maximum watches:

cat /proc/sys/fs/inotify/max_user_watches

To change it:

echo SomeNumber > /proc/sys/fs/inotify/max_user_watches

Increasing the maximum number of watches has no effect on memory usage.  Every watch actually utilized, however, does use a small amount of RAM.

 

This is probably from either the Dynamix File Integrity or Recycle Bin plugins.
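On the earlier question of which process is holding the watches: each inotify instance shows up as an anonymous fd under /proc, so they can be counted per process on any stock Linux box, nothing unRAID-specific. A sketch (note this counts inotify instances per process, not the individual watches inside them):

```shell
# Count inotify instances per process and show the owning command line.
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null |
  cut -d/ -f3 | sort | uniq -c | sort -rn |
  while read -r count pid; do
    cmd=$(tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null)
    printf '%5s instance(s)  pid %-6s %s\n' "$count" "$pid" "$cmd"
  done
```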

9 hours ago, Squid said:

To see the current number of maximum watches:


cat /proc/sys/fs/inotify/max_user_watches

 

Currently at 524288.  What should I increase it to?

Just now, jeffreywhunter said:

Currently at 524288.  What should I increase it to?

It all depends on your files.  Mine's at 720000 with no problems.  IIRC, each file and each folder takes a watch.

 

The change isn't persistent, though.  You will either have to include that echo command in the "go" file (/boot/config/go) or run it as a user script at array start.
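The go-file route can be sketched like this (a scratch file stands in for /boot/config/go so it can be tried anywhere; 720000 is just the example value from above):

```shell
# Append the echo to the go file so the limit is re-applied at every boot.
gofile=$(mktemp)    # stands in for /boot/config/go
cat >> "$gofile" <<'EOF'
# raise inotify watch limit (go runs at every boot)
echo 720000 > /proc/sys/fs/inotify/max_user_watches
EOF
tail -n2 "$gofile"
```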


Wellll.... I changed the max_user_watches to 720000 and rebooted.  Now my log is filling with 

 

Mar 3 01:53:40 HunterNAS root: error: plugins/advanced.buttons/AdvancedButtons.php: wrong csrf_token

in addition to the crond error...
 

Could the max_user_watches change cause that?  Diagnostics attached...  Ugh, this is beginning to feel like something is amiss...

hunternas-diagnostics-20170303-0156.zip


I'm not sure what's happening to my system.

 

I uninstalled the Advanced Buttons plugin, but it didn't uninstall - it just hung with a message at the bottom of the GUI stating that it was uninstalling.

I waited 10 minutes; nothing changed.  So I rebooted, and now I'm seeing wrong csrf_token with Fix Common Problems...  AND Advanced Buttons is still loading... (see screenshot attached).

 

I'm shutting down the server for the time being.  Something's not right and I'm afraid I'm going to really mess something up...  Help me Obiwan! ;)

Mar 3 02:19:15 HunterNAS root: HunterNASPlexServer: started succesfully!
Mar 3 02:19:15 HunterNAS emhttp: shcmd (104): /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php |& logger
Mar 3 02:19:18 HunterNAS root: Updating templates... Updating info... Done.
Mar 3 02:19:18 HunterNAS emhttp: nothing to sync
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: webGui/include/ProcessStatus.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:19 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:19 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
[...the same fix.common.problems wrong csrf_token entry repeats, roughly once per second, through Mar 3 02:19:57...]
Mar 3 02:19:58 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:19:59 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:00 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:01 HunterNAS crond[17169]: chdir failed: root Console and webGui login account
Mar 3 02:20:01 HunterNAS crond[17168]: chdir failed: root Console and webGui login account
Mar 3 02:20:01 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:02 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:03 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:04 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:05 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:06 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:07 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:08 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:09 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:10 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:11 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:12 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:13 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:14 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:15 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:16 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:17 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:18 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:19 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar 3 02:20:20 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
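In case it helps anyone digging through a flooded log: a quick way to see which messages are spamming is to strip the timestamp/hostname prefix and count identical lines. This is just a sketch — the `/tmp/sample_syslog` file below is a stand-in I built for illustration; on the actual server you'd point it at `/var/log/syslog` instead.

```shell
#!/bin/sh
# Build a small sample log for demonstration (hypothetical file; substitute
# /var/log/syslog on the live server)
cat > /tmp/sample_syslog <<'EOF'
Mar  3 02:19:19 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar  3 02:19:20 HunterNAS root: error: plugins/fix.common.problems/include/fixExec.php: wrong csrf_token
Mar  3 02:20:01 HunterNAS crond[17169]: chdir failed: root Console and webGui login account
EOF

# Drop the "Mon DD HH:MM:SS hostname " prefix, then count identical
# messages, most frequent first
awk '{sub(/^[A-Za-z]+ +[0-9]+ +[0-9:]+ +[^ ]+ +/, ""); print}' /tmp/sample_syslog \
  | sort | uniq -c | sort -rn | tee /tmp/spam_summary
```

The top line of the output is the message flooding the log, with its count — handy when several plugins are logging at once and you want to know which one to chase.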

 

hunternas-diagnostics-20170303-0226.zip
