urbancore - Posted January 2, 2020 (edited)

Allow me to try to explain this. A few days ago I upgraded from 6.7 to 6.8, and this problem has developed since then. I have tried the following: I've shut down all VMs and Docker containers, I've turned off the one external VM that has NFS mount access to some of my shares, and I've even rebooted, but the same command comes back, persistently using a good quarter of my 24-core system:

/usr/local/sbin/shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=330

No disk check is being done. No mover process is running. The last parity check came back clean. I know this process is not unusual at all, and a number of them run at all times on the system. But *one* of them is constantly consuming a good quarter of all my CPU for reasons I cannot seem to isolate. I've posted a screenshot and a dump of my current diagnostics; any help in tracking down the rogue process or plugin doing this would be welcome. Thanks! - Urbancore

tower-diagnostics-20200101-2301.zip
urbancore - Posted January 2, 2020

htop shows something wildly different from the web GUI, which I am adding here. Is this a web GUI bug?
urbancore - Posted January 2, 2020

Marking this solved. For those reading this with possibly similar issues, here is what the problem was: high I/O wait, as shown here in Glances, caused the GUI to report higher CPU use than htop or top showed. The cause was an external Plex box I built for Quick Sync hardware transcoding, attached to the unRAID box via NFS, which was doing library scans and thumbnail generation and consuming a ton of bandwidth (around 600-800 Mbps), and that was causing the I/O delays. So apparently the GUI counts I/O wait as CPU use in some cases. I found this using the Glances and Netdata Docker containers.
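For anyone wanting to confirm the same pattern, the split between real CPU time and I/O wait can be read straight out of /proc/stat (a minimal sketch; the field order shown is the standard Linux layout, and the figures are cumulative since boot rather than a live sample):

```shell
# The first line of /proc/stat aggregates CPU jiffies in this order:
# user nice system idle iowait irq softirq steal ...
read -r label user nice system idle iowait _ < /proc/stat
busy=$((user + nice + system))
total=$((busy + idle + iowait))
echo "busy:   $((100 * busy / total))%"
echo "iowait: $((100 * iowait / total))%"
```

A high iowait figure next to a modest busy figure matches what happened here: the dashboard was folding time spent waiting on NFS traffic into its CPU bar, while htop and top were not.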
nicolas246 - Posted June 17, 2021

Hi, I had the exact same issue. For me the cause was the Watchdog application I run on my OSMC media player. Once I disabled that, the issue was gone. It didn't matter whether I used NFS or SMB.
chepnut - Posted September 14, 2021

So glad I stumbled onto this thread; I have been fighting this issue for such a long time.
volcs0 - Posted October 19, 2021

I know this is an old thread, but I am having trouble sorting this out. Here is the command running:

/usr/local/sbin/shfs /mnt/user -disks 511 -o noatime,allow_other -o remember=330

It's using up almost all my CPU. I don't have anything set up like what's described above. How can I track down the source of this problem?
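One way to start narrowing this down is to list which local processes actually have files open under /mnt/user, since shfs is only doing work on someone else's behalf (a sketch scanning /proc; note it only sees processes on the unRAID box itself, so remote NFS/SMB clients like the Plex machine earlier in this thread won't show up and have to be checked on the network side):

```shell
# List local processes currently holding a file open under /mnt/user.
for dir in /proc/[0-9]*/; do
  pid=${dir#/proc/}; pid=${pid%/}
  if ls -l "/proc/$pid/fd" 2>/dev/null | grep -q '/mnt/user'; then
    printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
  fi
done
```

If nothing local shows up, the load is probably arriving over the network, and the NFS/SMB clients are the next place to look.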
sage2050 - Posted December 28, 2021

On 10/18/2021 at 11:49 PM, volcs0 said:
"I know this is an old thread, but I am having trouble sorting this out. Here is the command running: /usr/local/sbin/shfs /mnt/user -disks 511 -o noatime,allow_other -o remember=330 It's using up almost all my CPU. I don't have anything set up like described above. How can I sort out the source of this problem?"

Did you ever solve this? I've got a similar one:

/usr/local/sbin/shfs /mnt/user -disks 15 -o noatime,allow_other -o remember=0
mathiasdoe - Posted July 4, 2022

On 12/28/2021 at 5:53 PM, sage2050 said:
"Did you ever solve this? I've got a similar one: /usr/local/sbin/shfs /mnt/user -disks 15 -o noatime,allow_other -o remember=0"

What about you? Did you fix it? I have the exact same issue.
sNyteX - Posted August 11, 2022 (edited)

I have the same problem: it eats 20-50% CPU even when Docker, VMs, and Folder Caching are disabled. I hope someone has a solution.

tower-diagnostics-20220811-2152.zip
Mindsgoneawol - Posted September 30, 2022

I'm curious about this also. Two 10-core/20-thread Xeon CPUs are getting pegged at 100%, even with VMs and Docker disabled. shfs and Python 3 are swapping back and forth, maxing out the CPUs. And since upgrading to the latest and greatest version of Unraid, my Plex container is all fouled up. All these problems started after the upgrade.
thehandyman - Posted November 6, 2022

Just chiming in: I'm in the same situation, with `shfs` usage so high that the server is almost unusable.
Gregg - Posted November 28, 2022

Similar issue. I've narrowed mine down to two Docker containers on my system, resilio-sync and speedtest-tracker. The PID and its CPU usage go away if both containers are shut down. I would guess there is an issue in the Docker system, not these specific containers.
Jackaryas - Posted December 21, 2022 (edited)

I also had this issue; mine was related to Tdarr running on a different machine, doing constant folder watches over an NFS network drive. Reducing the frequency of the folder watching greatly reduced the CPU usage of shfs.
footkaput - Posted March 17, 2023 (edited)

I'll add that I had the same issue. It was Emby with real-time monitoring of libraries turned on. I previously had 9 USB drives plugged in directly to the same PC as Emby (a Windows 11 PC); now Emby connects via a mapped network drive.
Dalewn - Posted March 30, 2023 (edited)

On 11/28/2022 at 10:34 PM, Gregg said:
"Similar issue. I've narrowed mine down to two dockers on my system, resilio-sync and speedtest-tracker. The pid and usage goes away if both of the dockers are shutdown. I would guess there is an issue in the docker system not these specific containers."

I was suspecting my Jellyfin running on a little NUC for transcoding, but it actually was the speedtest-tracker! Thanks for the catch!

EDIT: I just realised I had also set Folder Caching to scan "/mnt/user/". This was another culprit that kept hitting the CPU.
cinereus - Posted April 26, 2023

Same issue here, just started today. Hundreds of `/usr/local/sbin/shfs /mnt/user` processes (one in particular) using up 150%+ CPU and causing the server to overheat.
Melawen - Posted June 25, 2023

I've just come across this issue after upgrading to 6.12.1 yesterday. I've never seen it happen before. It turned out to be Jellyfin causing the issue in my case. As soon as I stopped that Docker container, my CPU usage dropped from maxing out all the cores to a steady 20%.
Brandonb1987 - Posted August 27, 2023

I seem to be having the same issue since upgrading to 6.12.3. I've disabled all plugins, Docker, and VMs, and it's still pegging 80+% CPU. Two things are using CPU:

1) find /mnt/plex_cache/Plex -type f -maxdepth 18
Not sure why this is pegged when the only thing that uses that drive is the Plex Docker container, which is currently disabled/not even started.

2) /usr/local/bin/shfs /mnt/user -disks 32767 -o default_permissions,allow_other,noatime -o remember=0
No clue what that is...

Anyone able to help me pinpoint this further?

unraid-diagnostics-20230827-1401.zip
BRiT - Posted August 27, 2023

2 hours ago, Brandonb1987 said:
"1) find /mnt/plex_cache/Plex -type f -maxdepth 18 Not sure why this is being pegged when the only thing that uses that drive is Plex docker, but its currently disabled/not even started."

Smells like a process from Cache Dirs. Maybe push that path into the excludes list setting for the cache_dirs plugin?
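To check whether a stray `find` like that belongs to the Folder Caching (cache_dirs) plugin rather than a container, walking up its parent-process chain usually settles it (a sketch; 12345 is a placeholder for the busy find's PID taken from htop or ps):

```shell
# Walk the parent chain of a suspect PID; replace 12345 with the real PID.
pid=12345
while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
  name=$(ps -o comm= -p "$pid") || break
  printf '%s\t%s\n' "$pid" "$name"
  pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
done
```

If cache_dirs shows up as an ancestor, adding /mnt/plex_cache to the plugin's excludes list should stop the scanning.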
cinereus - Posted January 19

Same issue again today. Couldn't work out what was causing it.
cinereus - Posted January 29

On 4/26/2023 at 10:01 AM, cinereus said:
"Same issue here, just started today. Hundreds of `/usr/local/sbin/shfs /mnt/user` processes (one in particular) using up 150%+ CPU and causing the server to overheat."

And again today; the load average is 50.8!
justusjonas - Posted March 20

I have the exact same issue! Plex seems to be the trigger. Did anybody find a solution?
cinereus - Posted March 26

On 3/20/2024 at 8:36 PM, justusjonas said:
"I have the exact same issue! Plex seems to be the trigger. Did anybody find a solution?"

Unfortunately mine is not related to Plex. But it is eating all my CPU and I have no idea how to stop it...