Dataone Posted August 21, 2019 Hello, I have been having this issue for as long as I've used Unraid, so I've just assumed it is due to the low CPU power of the G3258. However, I am wondering if there is anything I should be tuning/tweaking to potentially fix it. Whenever I transfer files over NFS (to and from the server), the CPU usage of the G3258 rises to 100% with a throughput of roughly ~30MB/s. I don't think this is down to the low power of my CPU, however, as transferring the same files from one disk to another uses at most around ~60% of the CPU, at the disks' maximum throughput (~120MB/s). Disk Settings: (screenshot) NFS Settings: (screenshot) I can't see how my CPU usage would rise ~40% just by transferring a file over my network rather than locally, unless it really does require that much processing power. Any ideas? Cheers
Frank1940 Posted August 21, 2019 In a terminal session (either via SSH or the GUI), type htop on the command line. That should give you a real-time picture of what your CPU hog is.
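If htop happens not to be available, a rough equivalent is to have ps sort processes by CPU usage; a minimal sketch, assuming the GNU procps version of ps (which Unraid's Slackware base ships):

```shell
# List the top five CPU consumers, sorted descending by %CPU.
# Unlike htop this is a one-shot snapshot, not a live view.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 6
```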
Dataone Posted August 21, 2019 1 hour ago, Frank1940 said: In a terminal session (either via SSH or the GUI), type htop on the command line. That should give you a real-time picture of what your CPU hog is. I'm a little confused as to what I'm seeing, honestly: load is sky-high, yet shfs only appears to be using a few percent. Cheers
Struck Posted August 21, 2019 5 minutes ago, Dataone said: I'm a little confused as to what I'm seeing, honestly: load is sky-high, yet shfs only appears to be using a few percent. Cheers Try using top instead. It also shows the I/O wait, like this: %Cpu(s): 0.3 us, 1.1 sy, 0.5 ni, 98.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st If wa is high, the CPU is waiting on your disks.
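The wa value can also be pulled out of a batch-mode top sample with a bit of awk; a minimal sketch (it assumes the %Cpu(s) line layout shown above, where each value precedes its label, so we print the field just before "wa" — for live use, replace the echoed sample with `top -bn1`):

```shell
# Extract the iowait ("wa") percentage from a %Cpu(s) line.
# A fixed sample line is used here; live: top -bn1 | awk ...
sample='%Cpu(s): 0.3 us, 1.1 sy, 0.5 ni, 98.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st'
echo "$sample" | awk -F'[, ]+' '{for (i = 1; i <= NF; i++) if ($i == "wa") print $(i-1)}'
# prints 0.0
```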
Dataone Posted August 21, 2019 6 hours ago, Struck said: Try using top instead. It also shows the I/O wait, like this: %Cpu(s): 0.3 us, 1.1 sy, 0.5 ni, 98.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st If wa is high, the CPU is waiting on your disks. Well, that appears to be the case, unfortunately. However, I'm still unsure why I'm seeing a massive CPU usage hike and a throughput decrease when using NFS rather than transferring files locally. Locally: %Cpu(s): 0.7 us, 13.5 sy, 0.2 ni, 57.4 id, 26.7 wa, 0.0 hi, 1.5 si, 0.0 st NFS: %Cpu(s): 1.0 us, 7.7 sy, 0.3 ni, 0.2 id, 88.4 wa, 0.0 hi, 2.3 si, 0.0 st These aren't shabby drives either (white-label WD 8TBs), which is why I'm surprised I'm having wait issues. Any ideas/possibilities for tuning this to make it work a little better? Thanks.
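Those iowait figures can be double-checked without top at all by sampling /proc/stat twice and computing the iowait share of the interval; a sketch, assuming the standard "cpu" line layout from proc(5) (user nice system idle iowait irq softirq steal ...):

```shell
# Cumulative CPU tick counters live on the "cpu" line of /proc/stat;
# the 5th value after the label is iowait. Two samples one second apart
# give the iowait percentage over that second.
s1=$(grep '^cpu ' /proc/stat); sleep 1; s2=$(grep '^cpu ' /proc/stat)
echo "$s1 $s2" | awk '{
  t1 = $2+$3+$4+$5+$6+$7+$8+$9;         w1 = $6   # first sample: total, iowait
  t2 = $13+$14+$15+$16+$17+$18+$19+$20; w2 = $17  # second sample
  printf "iowait: %.1f%%\n", 100 * (w2 - w1) / (t2 - t1)
}'
```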
Frank1940 Posted August 21, 2019 Share Posted August 21, 2019 (edited) Are you sure that wait time truly is a problem? The only way that wait time would be a problem is if was slowing down some other process that was running concurrently. Are you seeing any real slowdowns in things like the GUI response? Your first screen capture (the one showing both CPU at 100%) is of the Dashboard screen of the GUI. Was it sluggish or non-responsive? EDIT: Have you tried a SMB transfer to see what happens there? I know NFS used to have less overhead than SMB did a few years but I not sure this is still the case. Edited August 21, 2019 by Frank1940 Quote Link to comment
Frank1940 Posted August 21, 2019 Share Posted August 21, 2019 I did a quick test using SMB and this what I got: %Cpu(s): 2.4 us, 7.7 sy, 0.2 ni, 80.1 id, 5.0 wa, 0.0 hi, 4.6 si, 0.0 st Transfer speed was running at 110Mbs. (It was a 25GB file.) Pass mark is 5482 for my CPU. Your CPU's passmark is 3868. Quote Link to comment
Dataone Posted August 22, 2019 %Cpu(s): 1.2 us, 5.9 sy, 0.2 ni, 88.6 id, 2.9 wa, 0.0 hi, 1.3 si, 0.0 st That's the output while the speed seems to be throttled, with SMB. I'm starting to think this might actually be an issue with my client and not the server. Here is the network history graph from my client: (screenshot) As you can see, it starts off fine (perhaps a buffer?) then drops to a crawl, just like NFS. I think this means it's not a server issue. I might need to look into changing my I/O scheduler. Cheers Edited August 22, 2019 by Dataone
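For the scheduler angle, the active scheduler per block device can be read straight from sysfs; a sketch (device names and the available schedulers vary by kernel — the bracketed entry is the active one):

```shell
# Print each block device and its I/O scheduler list; the active
# scheduler is shown in brackets, e.g. "[mq-deadline] kyber none".
for f in /sys/block/*/queue/scheduler; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  printf '%s: ' "$(basename "$(dirname "$(dirname "$f")")")"
  cat "$f"
done
# To switch (as root), e.g.:
#   echo mq-deadline > /sys/block/sda/queue/scheduler
```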
dalben Posted August 22, 2019 There's an ongoing thread in the bug reports forum that is trying to figure out really bad concurrent array performance. I wonder if they are related?