DuckBrained Posted August 7, 2021

I'll try to provide as much info as possible, but please ask for anything I've missed.

UnRAID configuration: 4 SSDs in the array, 1 of them for parity, plus a cache pool of 2 x 970 EVO Plus NVMe drives. UnRAID has a 10GbE NIC, connected through a 10GbE switch to a MacBook Air with 10GbE networking via a Thunderbolt dock.

Using Blackmagic Disk Speed Test I see write speeds no higher than 300MB/s, whether writing to a share that uses the cache or one that doesn't. Read speeds are just over 1000MB/s on both, which is what I'd expect for writes to the cache too, given the network is slower than the drives.

One odd thing to note: when writing to the shares, one of my ISOLATED CPU cores hits 100%. That core is assigned to a VM and pinned/isolated, so UnRAID itself shouldn't use it. I shut down the VM that uses that core and still see the same behaviour.

This is using SMB shares. Suggestions welcome.
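Since the spike lands on a supposedly isolated core, it's worth confirming what the kernel actually isolated and watching per-core load while a transfer runs. A minimal diagnostic sketch (mpstat comes from the sysstat package, which may need installing on UnRAID; pressing 1 in top is a fallback):

cat /sys/devices/system/cpu/isolated   # cores the kernel isolated - should match the VM pinning
mpstat -P ALL 1                        # per-core load each second; high %sys or %irq on one core during the SMB write points at kernel/interrupt work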
trurl Posted August 7, 2021

Note: SSDs in the array can't be trimmed. Writes to the parity array will always be slower than single-disk write speed due to real-time parity updates. See https://wiki.unraid.net/Manual/Storage_Management#Array_Write_Modes for an explanation of the two choices for parity writes; one is somewhat faster than the other at the expense of needing to read all the other disks. Of course, with SSDs there's no waiting for platter rotation, but more I/O still has to take place to keep parity in sync.
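For reference, the write mode can also be switched from the console. This is a sketch based on the mdcmd tunable commonly cited on the forums; verify the setting name and value mapping on your UnRAID version before relying on it:

mdcmd set md_write_method 1   # reconstruct write ("turbo"): reads all other data disks, writes data + parity in one pass
mdcmd set md_write_method 0   # revert to the default read/modify/write behaviour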
DuckBrained Posted August 7, 2021

Why would parity affect writing to the cache? Surely that should run at full speed, and it doesn't explain the weird CPU spike either. I tried the different write modes to no avail. Cheers
DuckBrained Posted August 7, 2021

Just for more clarity, I'm running a Ryzen 9 3950X and 64GB RAM, so there should be no system bottlenecks in terms of performance.
DuckBrained Posted August 7, 2021

More testing: I installed a disk speed docker and local write performance is as expected (3GB/s on the NVMe, 500MB/s on the SATA SSD). I also ran a network speed test from within a VM on UnRAID and see the same write-speed problem.
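A quick way to sanity-check local write speed without a docker, as a sketch (assumes the cache pool is mounted at the default /mnt/cache; oflag=direct bypasses the page cache so RAM doesn't inflate the figure):

dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=4096 oflag=direct
rm /mnt/cache/ddtest.bin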
DuckBrained Posted August 8, 2021

Update: I ran iperf and see poor speeds in one direction only. From the client (Mac) to the server (UnRAID):

[ 5]   6.00-7.00   sec   320 MBytes  2.68 Gbits/sec

I flipped the settings and, from UnRAID to the client, see:

[ 5]   1.00-2.00   sec  1.09 GBytes  9.38 Gbits/sec    0    656 KBytes

So it's a networking issue, but why is a big puzzle right now. That CPU spike on transfers to the server is puzzling me.
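For anyone reproducing this, the two runs above map onto something like the following (assuming iperf3, which matches the output format; -R reverses the direction so both tests can be driven from the Mac):

iperf3 -s                        # on UnRAID: listen
iperf3 -c <unraid-ip> -t 10      # on the Mac: client -> server (the slow direction here)
iperf3 -c <unraid-ip> -t 10 -R   # on the Mac: server -> client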
DuckBrained Posted August 8, 2021

Further testing: I shut down all of the VMs and Dockers on UnRAID and now I get full-speed writes:

[ 4]   8.00-9.00   sec  1.09 GBytes  9.40 Gbits/sec

So is this a resource issue?
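To make that on/off comparison quick to repeat, the VMs can be cycled from the console with virsh, the standard libvirt tool UnRAID's VM manager is built on (a sketch; the while/read loop copes with VM names containing spaces):

virsh list --name | while read -r vm; do
  [ -n "$vm" ] && virsh shutdown "$vm"   # graceful shutdown of every running VM
done
# ...run iperf3, then start the VMs back one at a time, re-testing after each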
DuckBrained Posted August 8, 2021

Another update: as I start each Windows VM that uses br0, the iperf speed drops by approximately 1Gbit/s; once all seven are launched I'm down to 3Gbit/s. As I shut down each VM, the speed increases again, and when they are all shut down I get full network speed. Assigning a Windows VM to virbr0 instead has no effect on network speed, so it's something to do with the bridge. The problem is, virbr0 is no good for me because the Windows machines need to accept incoming traffic.

So, is this a bug? My Ethernet controller is:

Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)

Thanks
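Before calling it a bug, a couple of things on the bridge are worth inspecting, since each running VM adds a vnetN tap port to br0 and hardware offloads can behave differently once the physical NIC is enslaved to a bridge. A diagnostic sketch (eth0/br0 are the UnRAID defaults and may differ on your setup):

ip link show master br0                       # lists the ports on br0 - one vnetN per running VM
ethtool -k eth0 | grep -E 'offload|scatter'   # active offloads on the physical NIC; compare with VMs up vs. down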