
Slow write speeds to UnRAID shares



I'll try to provide as much info as possible, but please ask about anything I've missed.

 

UnRAID configuration: 4 SSDs in the array, with 1 for parity. I have a cache pool of 2 x 970 EVO Plus NVMe drives.

The UnRAID server has a 10GbE NIC and is connected to a 10GbE switch; the client is a MacBook Air with 10GbE networking via a Thunderbolt dock.

 

Using Blackmagic Disk Speed Test I see write speeds no higher than 300 MB/s, whether writing to a share that uses the cache or one that doesn't. However, I get read speeds of just over 1000 MB/s on both, which is what I'd expect for writes to the cache as well, given the network is slower than the drives.

 

One odd thing to note: when writing to the shares, one of my ISOLATED CPU cores hits 100%. This is a core assigned to a VM and pinned/isolated so that UnRAID itself shouldn't use it. I shut down the VM that uses that core and still see the same behaviour.
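
Side note: to confirm the isolation is actually taking effect, something along these lines should work from the UnRAID console (the syslinux path is my assumption of where the boot config lives):

cat /sys/devices/system/cpu/isolated
grep -i isolcpus /boot/syslinux/syslinux.cfg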

 

This is using SMB shares.

 

Suggestions welcome.


Note: SSDs in the array can't be trimmed.

 

Writes to the parity array will always be slower than single-disk write speed due to real-time parity updates. See here:

 

https://wiki.unraid.net/Manual/Storage_Management#Array_Write_Modes

 

for an explanation of the two different choices for parity writes; one is somewhat faster than the other at the expense of needing to read all of the other disks.

 

Of course, with SSDs there's no waiting for platter rotation, but more I/O still has to take place to keep parity in sync.
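
If you want to experiment with the faster choice ("reconstruct write", a.k.a. turbo write), it can be changed under Settings > Disk Settings via the md_write_method tunable, or from the console with something like the following; check the wiki page above for the exact values, as I'm going from memory:

mdcmd set md_write_method 1     (reconstruct write / turbo write)
mdcmd set md_write_method 0     (read/modify/write, the default)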


Update: I ran iperf and I'm only seeing poor speeds in one direction. From the client (Mac) to the server (UnRAID):

 

[  5]   6.00-7.00   sec   320 MBytes  2.68 Gbits/sec  

 

I flipped the test direction and see this from UnRAID to the client:

 

[  5]   1.00-2.00   sec  1.09 GBytes  9.38 Gbits/sec    0    656 KBytes       

 

So it's a networking issue, but why is a big puzzle right now. That CPU spike on transfers to the server is also puzzling me.
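
For anyone wanting to reproduce this, an iperf3 test along these lines should show the same kind of numbers (substitute your server's IP; -R flips the direction without swapping server and client roles):

iperf3 -s                     (on the UnRAID server)
iperf3 -c <server-ip>         (on the Mac: client to server)
iperf3 -c <server-ip> -R      (on the Mac: server to client)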


Another update.

 

As I start each Windows VM that uses br0, the iperf speed drops by approximately 1 Gbit/s; once all 7 of them are launched I'm down to 3 Gbit/s.

 

As I shut down each VM, the speed increases again. When they are all shut down, I get full network speed.

 

If I assign a Windows VM to virbr0 instead, it has no effect on the network speed. So it's something to do with the bridge.

 

The problem is that virbr0 is no good for me, as the Windows machines need to be able to accept incoming traffic.

 

So, is this a bug? My Ethernet controller is: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
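
If it helps narrow this down, I can check what's attached to br0 and whether the NIC offloads change as VMs start, with the standard Linux tools, something like the following (I'm assuming they're present on UnRAID):

brctl show br0
ethtool -k eth0 | grep -E 'segmentation|receive'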

 

Thanks
