[SOLVED] Process for /usr/local/sbin/shfs using up abnormal amounts of CPU.



Allow me to try and explain this. A few days ago I upgraded from 6.7 to 6.8, and this problem has developed since then. I have tried the following: I've shut down all VMs and Dockers, I've turned off the one external VM which has NFS mount access to some of my shares, and I've even rebooted. The same command keeps coming back and persistently uses a good quarter of my 24-core system.

 

/usr/local/sbin/shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=330

 

No disk check is being done. No mover process is running. Last parity check came back clean. 

 

I know this process is not unusual at all, and a number of them run on the system at all times. But *one* of them is constantly sucking up a good quarter of all my CPU for reasons I cannot seem to isolate. I've posted a screenshot and a dump of my current diagnostics; any help in tracking down the rogue process or plugin that is doing this would be welcome.
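For reference, here are a couple of generic checks for whether anything is still holding the user shares open, locally or over NFS (standard Linux tools; nothing specific to my setup, and the path is just mine):

# List any processes with files open under the user shares.
# (+D walks the whole tree, so this can take a while on large shares.)
lsof +D /mnt/user 2>/dev/null | head -n 20

# Look for established NFS client connections (NFS uses TCP port 2049).
ss -tn state established '( sport = :2049 )'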

Thanks! 

 

- Urbancore. 

htop-example.jpg

tower-diagnostics-20200101-2301.zip

Edited by urbancore

Marking this solved. For those reading this with possibly similar issues, here is what the problem was.

 

High I/O wait, as shown here in Glances, caused the GUI to report higher CPU use than htop or top showed. The cause was an external Plex box I built to use Quick Sync hardware transcoding, attached to the unRaid box via NFS. It was doing library scans and thumbnail generation, sucking up a ton of bandwidth (like 6/800 mbps), and that was causing the I/O delays.

 

So apparently the GUI counts I/O wait as CPU use in some cases. I found this using the Glances and Netdata dockers.
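For anyone who wants to verify this on their own box, comparing the iowait column against real user/system CPU time makes the difference obvious. A rough sketch with standard Linux tools (iostat comes from the sysstat package and may not be present on a stock install):

# Show iowait ("wa") alongside user/system CPU every 2 seconds, 5 samples.
# High wa with low us/sy means the cores are waiting on disk or NFS, not computing.
vmstat 2 5

# Per-device view: %util and await show which disk or mount is the bottleneck.
iostat -x 2 3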

glances.png

GUItop.png


I know this is an old thread, but I am having trouble sorting this out.

Here is the command running:

 /usr/local/sbin/shfs /mnt/user -disks 511 -o noatime,allow_other -o remember=330
It's using up almost all my CPU.

I don't have anything set up like described above.

How can I sort out the source of this problem?

On 10/18/2021 at 11:49 PM, volcs0 said:

I know this is an old thread, but I am having trouble sorting this out.

Here is the command running:

 /usr/local/sbin/shfs /mnt/user -disks 511 -o noatime,allow_other -o remember=330
It's using up almost all my CPU.

I don't have anything set up like described above.

How can I sort out the source of this problem?

Did you ever solve this? I've got a similar one:

 /usr/local/sbin/shfs /mnt/user -disks 15 -o noatime,allow_other -o remember=0


I'm curious about this also. Two 10-core/20-thread Xeon CPUs are getting pegged at 100%, and that's even with VMs and Docker disabled. shfs and Python 3 are swapping back and forth, maxing out the CPUs. And since upgrading to the latest and greatest version of unRaid, my Plex container is all fouled up. All these problems started after the upgrade.
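A quick way to see which Python process (often a plugin script or a container healthcheck) is trading places with shfs is to list the heaviest processes with their full command lines; a rough sketch using only stock tools (the <PID> below is a placeholder):

# Heaviest processes first, with full command lines and elapsed time.
ps -eo pid,ppid,pcpu,etime,args --sort=-pcpu | head -n 15

# Walk up from a suspicious PID to see what launched it.
ps -o pid,ppid,args -p <PID>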

On 11/28/2022 at 10:34 PM, Gregg said:

Similar issue. I've narrowed mine down to two dockers on my system, resilio-sync and speedtest-tracker. The PID and the usage go away if both of the dockers are shut down. I would guess there is an issue in the Docker system, not these specific containers.


I suspected my Jellyfin running on a little NUC for transcoding, but it was actually speedtest-tracker!
Thanks for the catch!

 

EDIT: I just realised I had also set folder caching to scan "/mnt/user/". This was another culprit that kept hitting the CPU.
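For anyone else trying to narrow it down the same way, the stop-one-suspect-at-a-time approach is easy to script; a rough sketch assuming the standard Docker CLI, with the container names as examples only:

# Stop suspects one at a time and watch whether the shfs CPU usage drops.
for c in resilio-sync speedtest-tracker; do
  docker stop "$c"
  sleep 30
  top -b -n 1 | grep -m 1 shfs
done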

Edited by Dalewn

I've just come across this issue after upgrading to 6.12.1 yesterday. I've never seen it happen before. It turned out to be Jellyfin causing the issue in my case. As soon as I stopped that docker, my CPU usage dropped from maxing out all the cores to a steady 20%.


I seem to be having the same issue since upgrading to 6.12.3. I've disabled all plugins, Docker, and VMs. It's still pegging 80%+ CPU.

 

Two things are using CPU:

 

1) find /mnt/plex_cache/Plex -type f -maxdepth 18

Not sure why this is running when the only thing that uses that drive is the Plex docker, but it's currently disabled/not even started.

 

2) /usr/local/bin/shfs /mnt/user -disks 32767 -o default_permissions,allow_other,noatime -o remember=0

No clue what that is...

 

Anyone able to help me pinpoint this further?

 

unraid-diagnostics-20230827-1401.zip

2 hours ago, Brandonb1987 said:

1) find /mnt/plex_cache/Plex -type f -maxdepth 18

Not sure why this is running when the only thing that uses that drive is the Plex docker, but it's currently disabled/not even started.

 

Smells like a process from Cache Dirs. Maybe push that path into the excludes list setting for the cache dirs plugin?
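One way to confirm whether a stray find like that really belongs to the Cache Dirs plugin is to trace its parent process; a rough sketch, with the pgrep pattern as an example only:

# Grab the PID of the long-running find and see what spawned it.
PID=$(pgrep -f "find /mnt/plex_cache" | head -n 1)
ps -o pid,ppid,args -p "$PID"

# Inspect the parent; a cache_dirs script there points at the plugin.
ps -o pid,args -p "$(ps -o ppid= -p "$PID")"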
