TheGood Posted November 29, 2019
Can anyone more knowledgeable than me tell me whether the idle CPU usage I see on these two processes is normal? No active streams, no cron jobs (as far as I'm aware); they sit there idling even after a reboot. Attaching diagnostics and top output.
unraid-diagnostics-20191129-0822.zip
nerbonne Posted November 29, 2019
I'm having the same issue with shfs using a lot of CPU. Any time I load new media into Plex, shfs CPU goes sky high and Plex gets really slow, to the point of not loading. I'm on 6.8.0 RC7 because 6.7.2 was a mess with high I/O wait, but it seems like 6.8.0 is just as bad.
tower-diagnostics-20191129-1211.zip
Squid Posted November 29, 2019
2 hours ago, nerbonne said:
I'm having the same issue with shfs using a lot of CPU...
How is it if you're not transcoding at the same time?

  PID USER   PR NI    VIRT    RES   SHR S  %CPU %MEM    TIME+ COMMAND
30086 nobody 20  0 1980108 266880 13300 R 476.5  0.4 22:25.85 Plex Tran+
32559 nobody 20  0 5329376   1.8g 36160 S  52.9  2.9 68:29.85 Plex Medi+

Also, while not strictly related: is there a reason you're running a 220G docker.img? You probably don't need more than 20G.
nerbonne Posted November 30, 2019
So a stream that has already started can continue to be watched, but no new streams can be started because the server will not respond. I optimized the database and it seems to be working better. Regarding the 220G docker image: there was a time when I ran out of space, multiple times. It must have been a log file growing out of control or something. I see that docker usage is around 14G, so I'll lower it to 20G. Thanks for the tip!
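A quick way to sanity-check how big a docker.img really needs to be is to compare its configured (apparent) size with the blocks actually allocated on disk, since loopback images like this are typically sparse. This is a minimal sketch using a throwaway file under /tmp as a stand-in; the real path on an Unraid box would be different.

```shell
# Create a demo sparse image file to stand in for docker.img
# (the path and size here are made up for illustration).
truncate -s 100M /tmp/demo-docker.img

# Apparent size: the configured capacity of the image.
du -h --apparent-size /tmp/demo-docker.img

# Allocated size: blocks actually written, i.e. real usage.
du -h /tmp/demo-docker.img
```

If the allocated size sits far below the apparent size (as with the ~14G used of 220G above), the image can safely be recreated smaller.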
skoj Posted January 15, 2020 (edited)
<EDIT> Please ignore, not an unraid problem. Turns out a host was scanning a share 24/7. </EDIT>
I've had the same issue since upgrading from 6.7.0 to 6.8.1. All dockers are turned off and no clients are connected. Idle CPU was 5-10% before the upgrade. Has anyone run into this and fixed it? The impact is minor, but I hate to lose the capacity. The other odd thing is that there's network traffic even though all of the disks are spun down and the I/O counters are very nearly zero.
Edited January 20, 2020 by skoj: Fixed problem.
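The culprit skoj found (a host scanning a share around the clock) can be spotted from the server side with Samba's own smbstatus tool, which ships with Samba and is present on Unraid. A rough sketch, assuming the shares are served over SMB:

```shell
# List clients currently connected to Samba shares; a host indexing a
# share 24/7 shows up as a long-lived session with ongoing activity.
if command -v smbstatus >/dev/null 2>&1; then
    smbstatus -S    # one line per connected share/session
    smbstatus -L    # files currently locked (held open) by clients
else
    echo "smbstatus not installed"
fi
```

Cross-referencing the session list against shfs CPU spikes should point at the offending client.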
Joe Posted May 24, 2020 (edited)
Having a similar issue. I'm trying to figure out why my server is using so much power when all the drives are spun down. How did you get that nice graphic of processes? It is not top. Here is my top:

top - 00:39:55 up 15:54, 1 user, load average: 2.74, 2.04, 1.88
Tasks: 366 total, 1 running, 365 sleeping, 0 stopped, 0 zombie
%Cpu(s): 4.5 us, 11.4 sy, 0.0 ni, 83.8 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
MiB Mem : 32182.9 total, 267.6 free, 10365.1 used, 21550.2 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 20385.8 avail Mem

  PID USER   PR NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
11500 root   20  0 1051020 349884  1340 S 65.4  1.1 470:38.53 shfs
30754 root   20  0 9105560   8.1g 20608 S 29.2 25.8 586:22.35 qemu-system-x86
12026 nobody 20  0   65600  27576 19920 S 22.9  0.1 187:22.22 smbd
11026 root   20  0  283640   4312  3568 S  0.7  0.0   4:53.21 emhttpd

Edited May 24, 2020 by Joe
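For chasing idle CPU burners like the shfs entry above, a non-interactive snapshot is often handier than watching top. One sketch using plain ps (procps is standard on Unraid):

```shell
# Snapshot the top CPU consumers, sorted descending.
# Note: ps reports %cpu averaged over each process's lifetime, so a
# long-running busy daemon like shfs stands out even if it happens to
# be momentarily idle when you sample.
ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 6
```

Running this a few times while the drives are spun down should show whether shfs, qemu, or smbd is the steady consumer.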
orlando500 Posted June 28, 2020
On 5/24/2020 at 6:40 AM, Joe said:
Having a similar issue. I'm trying to figure out why my server is using so much power when all the drives are spun down...
Did you figure it out? I have the same issue.