
[6.10.3] Slow Cache Drive Performance



I don't want to hijack berta's thread, but my case is practically identical.

I have NVMe and SSD cache drives (two separate single-disk caches) and a 10 gig network (both the server and the desktops have 10 gig NICs).

After upgrading to 6.10.3, my speeds are fixed at 100MB/s, no more, no less.

It does not matter whether I copy from or to the cache drives (I tried both of them in both directions).

 

Reporting on Unraid's MAIN tab is also weird.

Transferring 1.5TB of data from my Win 10 desktop PC to the server (the cache drive is empty and 2TB in size):

In Unraid > Main > Pool devices > cache_ssd, the write column reports 0 MB/s for some time, then 300-450 MB/s ... then 0 MB/s again ...

Same behaviour can be observed with smaller 3-4GB files.

 

Before the Unraid version change the speeds were close to the maximum supported by the drives/network.

The only change in the setup was the new version.

 

Hopefully this will be fixed in a future upgrade, because I hate the idea of downgrading the OS.

6 hours ago, EvilUSB said:

Reporting on Unraid's MAIN tab is also weird.

Transferring 1.5TB of data from my Win 10 desktop PC to the server (the cache drive is empty and 2TB in size):

In Unraid > Main > Pool devices > cache_ssd, the write column reports 0 MB/s for some time, then 300-450 MB/s ... then 0 MB/s again ...

Same behaviour can be observed with smaller 3-4GB files.

This suggests the devices are waiting for the data and then writing it in quick bursts. Start by checking that the NIC link speed is 10GbE and not 1GbE; if it is 10GbE, run a single-stream iperf test in both directions.
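
If it helps, the checks could look something like this (I'm using iperf3 here; replace 192.168.1.10 with your server's actual IP, and note that iperf3 has to be installed on both ends, e.g. via a community plugin on the Unraid side):

# on the Unraid console: confirm the negotiated link speed
ethtool eth0 | grep Speed         # a 10GbE link should report Speed: 10000Mb/s

# on the Unraid console: start the iperf3 server
iperf3 -s

# on the Windows desktop: run a single stream in each direction
iperf3 -c 192.168.1.10 -P 1       # desktop -> server
iperf3 -c 192.168.1.10 -P 1 -R    # server -> desktop (reverse mode)

A result that tops out around 940 Mbits/sec usually means a 1GbE link somewhere in the path.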


Problem solved.

 

Thanks big time JorgeB!

 

I followed your advice and ran iperf in both directions. It was holding at 1Gbps.

Then I double-checked the interfaces.
Windows was reporting 10Gbps.
On Unraid, ethtool reported 10Gbps for eth0 ... but only 1Gbps for bond0.
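
For reference, this is roughly what the check looked like (grep just trims the output down to the speed line):

ethtool eth0  | grep Speed        # Speed: 10000Mb/s  (the PCIe 10GbE NIC)
ethtool bond0 | grep Speed        # Speed: 1000Mb/s   (the bond, stuck at the onboard NICs' speed)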

 

My setup has a single PCIe 10Gbps interface and two integrated 1Gbps interfaces, bonded together by default.

After disabling interface bonding (leaving bridging enabled), everything went back to normal.

In my case that solved the mystery.

 

Now ethtool reports 10Gbps for both eth0 and br0.
iperf now shows around 8-8.5Gbps, which is amazing.

 

If anyone is wondering how to disable interface bonding, you have to:
- stop all VMs and disable the VM Manager (Settings > VM Manager)
- stop all Docker containers and disable Docker (Settings > Docker)
- make sure your primary NIC is eth0 (Settings > Network Settings > Interface Rules)
- change Enable bonding to No
- re-enable and start your VMs and Docker containers, then re-check speeds as shown below
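
To confirm everything took, these are the quick checks I ran afterwards (192.168.1.10 is again just a stand-in for your server's IP):

# on the Unraid console
ethtool eth0 | grep Speed         # Speed: 10000Mb/s
ethtool br0  | grep Speed         # Speed: 10000Mb/s

# from the Windows desktop
iperf3 -c 192.168.1.10 -P 1       # now around 8-8.5 Gbits/sec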

 

Unraid forums are the best!

