Still not enough inotify watches?



I've been dealing with a recurring error about the inotify watch limit being exceeded.

 

Jul 21 10:38:43 HunterNAS inotifywait[9065]: Failed to watch /mnt/disk5; upper limit on inotify watches reached!

 

So I read in the forum about changing them in the go file, which I did some time ago. There were no problems for a while, but now I'm seeing the error again on disk 5.

 

Current Go File

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

# Resize the /var/log tmpfs to 128MB (from 256MB)
mount -o remount,size=128m /var/log

# Increase max_user_watches to 720000 (default was 524288)
echo 720000 > /proc/sys/fs/inotify/max_user_watches

cat /proc/sys/fs/inotify/max_user_watches returns 720000, so I know the command worked.

 

I tailed the log, and no "no space left on device" errors were displayed:

root@HunterNAS:~# tail -f /var/log/dmesg
[   19.910230] sd 1:0:7:0: [sdm] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[   19.910301] sd 1:0:7:0: Attached scsi generic sg13 type 0
[   19.910397] sd 1:0:7:0: [sdm] 4096-byte physical blocks
[   19.910519] sd 1:0:7:0: [sdm] Write Protect is off
[   19.910599] sd 1:0:7:0: [sdm] Mode Sense: 00 3a 00 00
[   19.910617] sd 1:0:7:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   19.927154]  sdl: sdl1
[   19.927586] sd 1:0:6:0: [sdl] Attached SCSI disk
[   19.986792]  sdm: sdm1
[   19.987355] sd 1:0:7:0: [sdm] Attached SCSI disk

 

Is there a way to determine what is consuming these watches?  Should I just increase them again?

 

Thanks in advance, oh wizened ones...
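For anyone searching later: one way to see which processes are consuming watches is to count the "inotify wd:" entries in each process's fdinfo files — every watched path shows up as one such line. A minimal sketch (assumes GNU find and a root shell so all of /proc is readable):

# List inotify descriptors, then count one "inotify wd:" line per watch in each
# fd's fdinfo, printing watch count, PID, and process name, busiest first.
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' -printf '%hinfo/%f\n' 2>/dev/null |
while read -r fdinfo; do
    watches=$(grep -c '^inotify' "$fdinfo" 2>/dev/null)
    [ -n "$watches" ] || continue          # fd vanished between find and grep
    pid=${fdinfo#/proc/}; pid=${pid%%/*}   # /proc/PID/fdinfo/N -> PID
    printf '%8d  pid %-6s %s\n' "$watches" "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
done | sort -rn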

 

 

18 minutes ago, jeffreywhunter said:

So I read in the forum about changing them in the go file, which I did some time ago. There were no problems for a while, but now I'm seeing the error again on disk 5.

 

The recommended way of changing the watches is probably through the Tips and Tweaks plugin, but what you did is fine.


This is what I see...

root@HunterNAS:~# cd /boot
root@HunterNAS:/boot# cat inotify.txt
udevd      1252             root    5r  a_inode                0,9            0       2050 inotify
dbus-daem  1597       messagebus    5r  a_inode                0,9            0       2050 inotify
acpid      1655             root    8r  a_inode                0,9            0       2050 inotify
smbd-noti  1675             root   14r  a_inode                0,9            0       2050 inotify
agetty     7886             root    4r  a_inode                0,9            0       2050 inotify
agetty     7887             root    4r  a_inode                0,9            0       2050 inotify
agetty     7888             root    4r  a_inode                0,9            0       2050 inotify
agetty     7890             root    4r  a_inode                0,9            0       2050 inotify
agetty     7891             root    4r  a_inode                0,9            0       2050 inotify
agetty     7892             root    4r  a_inode                0,9            0       2050 inotify
avahi-dae  8019            avahi   13r  a_inode                0,9            0       2050 inotify
tail       9975             root    4r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734           nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10735     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10736     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10742     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10743     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10748     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10749     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10750     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10792     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10795     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10796     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10798     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10800     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10834     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 10843     nobody   92r  a_inode                0,9            0       2050 inotify
Plex\x20M 10734 14195     nobody   92r  a_inode                0,9            0       2050 inotify
grep      15559             root    1w      REG                8,1            0      19966 /boot/inotify.txt

How do I decode this to determine why my 720,000 inotify watches are not large enough...?
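(For reference, decoding it: each row is a process holding an inotify file descriptor — the FD column, e.g. "92r", is the descriptor number — but lsof doesn't show how many watches sit behind each descriptor. That lives in /proc/PID/fdinfo. A minimal sketch using the Plex line from the listing above:)

# Plex Media Server is pid 10734 and its inotify descriptor is fd 92 (per the
# lsof output); each "inotify wd:" line in its fdinfo is one watched path.
grep -c '^inotify' /proc/10734/fdinfo/92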

 

  • 2 months later...
On 7/21/2017 at 5:15 PM, jeffreywhunter said:

How do I decode this to determine why my 720,000 inotify watches are not large enough...?

 

 

Did you ever manage to figure this out? I'm getting the same warning and would like to identify the culprit...

 

:)

 

6 minutes ago, dlandon said:

It doesn't show how many are in use.  I would double it to see if that is enough.

The weird thing is I'm up to 4,000,000 and I'm still getting the message... That can't be right... I could double it again, as I have RAM to spare, but I feel I'm just masking an issue. Everything else is running great, just getting that error message...


I have mine set to 1 million, so 4 million doesn't seem out of line.

 

It all depends on how many files are on your system and what dockers and plugins you are running.  For instance, if you have Plex set to watch for new files, that will take inotify watches. If CrashPlan is looking for files to backup, that will take inotify watches. If Dynamix File Integrity is looking for files that change, that will take inotify watches. If the File Activity plugin is watching for files that are opened, that will use inotify watches.  There are probably others :)

 

In short... the more files you have, and the more apps you have that watch them, the more inotify watches you need.
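A rough worked example with made-up numbers: suppose Plex watches a 400,000-file library and Dynamix File Integrity checksums the same files. Each app registers its own watches, so you budget the sum, not the maximum:

# Two apps watching the same 400k files still need 2 x 400k watches.
echo $((400000 + 400000))   # 800000 -- already past the 524288 default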

Edited by ljm42

I think my issue is resolved. I went into the Tips and Tweaks plugin and doubled my inotify watches from 4 million to 8 million, and re-ran the Fix Common Problems scan. It told me I was still running out of inotify watches.

 

I then went to the command line and (after googling) manually looked up the configured maximum number of inotify watches using this command:

 

cat /proc/sys/fs/inotify/max_user_watches

Lo and behold, my inotify max was still set at the default 524288, even though Fix Common Problems said 4,000,000... I hit the default button in Fix Common Problems, reset everything, and then upped the inotify watches to 1 million. Sure enough, this time the command showed 1 million watches, and Fix Common Problems agreed... And best of all, no more warnings!
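A quick sanity check for anyone else hitting this plugin/kernel mismatch — read the live value straight from the kernel and compare it with what the GUI claims:

# Two equivalent reads of the live limit; both should match the plugin's setting.
cat /proc/sys/fs/inotify/max_user_watches
sysctl -n fs.inotify.max_user_watches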

  • 2 weeks later...

Great that you found a solution.  I'm still getting the error.  Currently just upped inotify from 2048000 to 3072000.  Does anyone know if there is a real-time command or a utility to show how many watches are in use?  I did the following and got this - not sure if there is a better way...

ls -l /proc/*/fd/* | grep notify
root@HunterNAS:~# ls -l /proc/*/fd/* | grep notify
/bin/ls: cannot access '/proc/26277/fd/255': No such file or directory
/bin/ls: cannot access '/proc/26277/fd/3': No such file or directory
/bin/ls: cannot access '/proc/self/fd/255': No such file or directory
/bin/ls: cannot access '/proc/self/fd/3': No such file or directory
/bin/ls: cannot access '/proc/thread-self/fd/255': No such file or directory
/bin/ls: cannot access '/proc/thread-self/fd/3': No such file or directory
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/1252/fd/5 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/1594/fd/5 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/1652/fd/8 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  2 11:39 /proc/1672/fd/14 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  6 06:04 /proc/25548/fd/4 -> anon_inode:inotify
lr-x------ 1 nobody users 64 Oct  6 06:04 /proc/25579/fd/93 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8071/fd/4 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8072/fd/4 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8073/fd/4 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8074/fd/4 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8075/fd/4 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8076/fd/4 -> anon_inode:inotify
lr-x------ 1 root   root  64 Oct  1 18:31 /proc/8206/fd/13 -> anon_inode:inotify

The log appears to only complain about inotify watches on Disk8.

 

Syslog attached.  Thoughts?

hunternas-diagnostics-20171006-1132.zip
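In the meantime, a low-tech way to catch the moment the limit trips — assuming the inotifywait complaints land in /var/log/syslog the way they did in the first post:

# Follow the syslog live and surface only new inotify-related lines.
tail -f /var/log/syslog | grep --line-buffered inotify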

Edited by jeffreywhunter
1 hour ago, jeffreywhunter said:

Does anyone know if there is a real-time command or a utility to show how many watches are in use?

Install the Tips and Tweaks plugin.  You can see and modify the inotify watches there.

  • 2 weeks later...
  • 3 years later...
  • 10 months later...

Bumping this up to the top... I have an ASP.NET Core docker container that I'm trying to run, and as part of its process it spawns file watchers on a long list of subdirectories (it's a photo management app - Damselfly - so it watches for new photos to be added).

 

I also have other dockers running for Radarr, Sonarr, Lidarr, Plex, etc. that also use file watchers to monitor for new media being added to their respective directories.

 

I'm positive that R/S/L/P et al have far more than 128 inotify watchers being set up, given the size of each of those libraries.  However, when I launch Damselfly, it'll set up watchers on a few of the subdirectories in my photo library, and eventually throw up a bunch of errors in its log like so:

 

Exception creating filewatcher for /pictures/2007/2007-06-28: The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached.

 

Reading that, my first thought was to increase the inotify limit on Unraid, so I did. I've now got it up to 4,000,000, and can see that both through the console and the Tips and Tweaks plugin. The numbers match.

 

And yet, that docker container still complains about a 128 inotify max.

 

To make it even more confusing, I just tried installing another docker (also ASP.NET Core it appears) for Kavita - if Damselfly is running before I start the Kavita docker, Kavita hits the same exact error and refuses to start.  If I launch Kavita first, it runs without incident.

 

I have rebooted the whole server, and I have 64 GB of RAM, so memory isn't an issue... what else can I do to force these docker containers to recognize the correct inotify watch limit?
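(The replies below get to the bottom of this, but note the wording of the error: the 128 is the limit on inotify *instances* — the handles created by inotify_init — which is a separate sysctl from the watches limit, so raising max_user_watches alone won't move it. A quick check of both:)

# Two separate ceilings; the Damselfly error is about the first one.
cat /proc/sys/fs/inotify/max_user_instances   # default 128
cat /proc/sys/fs/inotify/max_user_watches     # default 524288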

  • 4 weeks later...
On 9/17/2021 at 11:02 AM, rswafford said:

And yet, that docker container still complains about a 128 inotify max.

I was reading this Stack Exchange answer https://unix.stackexchange.com/questions/444998/how-to-set-and-understand-fs-notify-max-user-watches and reviewed the source of the Tips and Tweaks plugin to see how it modifies fs.inotify.max_user_watches: https://github.com/dlandon/tips.and.tweaks/blob/master/source/scripts/rc.tweaks#L59

 

	# Set the inotify max_user_watches.
	if [ "$MAX_WATCHES" = "" ]; then
		# Set the inotify max_user_watches.
		sysctl -qw fs.inotify.max_user_watches="524288" > /dev/null
	else
		# Set the inotify max_user_watches.
		sysctl -qw fs.inotify.max_user_watches="$MAX_WATCHES" > /dev/null
	fi

 

So in my /boot/config/go file I add this line

sysctl -qw fs.inotify.max_user_instances="2048" > /dev/null

and when I run "cat /proc/sys/fs/inotify/max_user_instances" it now returns 2048 instead of the default 128.

2048 is an arbitrary amount. Since I changed fs.inotify.max_user_watches from the default of 524288 to 8388608 (16x), I increased fs.inotify.max_user_instances by the same factor.

 

I like to experiment with multiple docker projects often, lately Damselfly, and I ran into this inotify problem as well.
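Putting the two settings from this thread together, a minimal go-file sketch — the values are the ones used above, so scale them to your own library sizes:

# Raise both inotify ceilings at boot; these are 16x the kernel defaults.
sysctl -qw fs.inotify.max_user_watches=8388608
sysctl -qw fs.inotify.max_user_instances=2048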

Edited by m8ty
  • 2 months later...
On 10/12/2021 at 1:14 PM, m8ty said:

I like to experiment with multiple docker projects often, lately Damselfly, and I ran into this inotify problem as well.

m8ty - Any observations you can share on this?  I ran into the same issues with Damselfly and followed your lead to address inotify watches too.  Thanks for the help!

  • 2 weeks later...
On 1/4/2022 at 10:17 PM, Poke0 said:

m8ty - Any observations you can share on this?  I ran into the same issues with Damselfly and followed your lead to address inotify watches too.  Thanks for the help!

I have noticed increased RAM usage with htop and a couple of "general protection fault" errors in my logs, but I can't say for certain whether either was due to the changes I made. I have also stopped testing Damselfly and am using only PhotoStructure at this time, to manage the available RAM better.
