(SOLVED) Cache: Performance and max filesize


Toskache


A few weeks ago I upgraded both my PC and the unraid server with 10G network interfaces.

Until then, I had only configured one SSD as a cache disk. This cache disk was formatted with xfs and was, of course, the bottleneck preventing full 10G throughput.

So I installed an identical second SSD and configured the two as a cache pool (btrfs, raid0).
Strangely, this had no effect on write performance over the 10G interface: I still get at most ~300 MB/s when transferring a large file to a share (with cache = prefer). Is there anything else I'm missing here? Diagnostics data is attached.

 

During the test with dd:
 

sync; dd if=/dev/zero of=/mnt/cache/testfile.img bs=5G count=1; sync

I realized that the maximum file size on the cache seems to be 2G. Is that correct? I used to write larger files to the unraid NAS, so how will the cache behave?
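As an aside, the 2G limit is most likely a dd artifact rather than a cache limit: with `bs=5G`, dd tries to transfer the whole file in a single write() call, and on Linux one write() moves at most about 2 GiB. A smaller block size with a matching count writes the full file; a minimal sketch (the path mirrors the test above):

```shell
# bs=5G makes dd issue one huge write(); Linux caps a single
# write() at roughly 2 GiB, so the file stops short at ~2G.
# A modest block size with a larger count sidesteps the cap:
dd if=/dev/zero of=/mnt/cache/testfile.img bs=1M count=5120  # 5 GiB total
sync
ls -lh /mnt/cache/testfile.img  # should report the full 5G
```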

nas.fritz.box-diagnostics-20201012-1601.zip

11 minutes ago, JorgeB said:

Have you done an iperf test? See what you get for a single stream.

The iperf performance seems to be OK:

 

toskache@10GPC ~ % iperf3 -c 192.168.2.4
Connecting to host 192.168.2.4, port 5201
[  5] local 192.168.2.199 port 50204 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 GBytes  9.90 Gbits/sec
[  5]   1.00-2.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   2.00-3.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   3.00-4.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   5.00-6.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   6.00-7.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec
[  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  sender
[  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  receiver

iperf Done.

 


Sorry for the late response.

@JorgeB I performed a retest with iperf:

unraid as iperf3 server:

toskache@Hacky ~ % iperf3 -c 192.168.2.4
Connecting to host 192.168.2.4, port 5201
[  5] local 192.168.2.26 port 53229 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   519 MBytes  4.35 Gbits/sec
[  5]   1.00-2.00   sec   504 MBytes  4.22 Gbits/sec
[  5]   2.00-3.00   sec   491 MBytes  4.12 Gbits/sec
[  5]   3.00-4.00   sec   498 MBytes  4.17 Gbits/sec
[  5]   4.00-5.00   sec   499 MBytes  4.18 Gbits/sec
[  5]   5.00-6.00   sec   437 MBytes  3.66 Gbits/sec
[  5]   6.00-7.00   sec   384 MBytes  3.22 Gbits/sec
[  5]   7.00-8.00   sec   424 MBytes  3.56 Gbits/sec
[  5]   8.00-9.00   sec   472 MBytes  3.96 Gbits/sec
[  5]   9.00-10.00  sec   501 MBytes  4.20 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  4.62 GBytes  3.97 Gbits/sec                  sender
[  5]   0.00-10.00  sec  4.62 GBytes  3.97 Gbits/sec                  receiver

iperf Done.

unraid as iperf3 client:

toskache@Hacky ~ % iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.2.4, port 45284
[  5] local 192.168.2.26 port 5201 connected to 192.168.2.4 port 45286
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.02 GBytes  8.73 Gbits/sec
[  5]   1.00-2.00   sec  1.04 GBytes  8.92 Gbits/sec
[  5]   2.00-3.00   sec  1.04 GBytes  8.91 Gbits/sec
[  5]   3.00-4.00   sec  1.04 GBytes  8.95 Gbits/sec
[  5]   4.00-5.00   sec  1.04 GBytes  8.97 Gbits/sec
[  5]   5.00-6.00   sec  1.05 GBytes  8.98 Gbits/sec
[  5]   6.00-7.00   sec  1.04 GBytes  8.94 Gbits/sec
[  5]   7.00-8.00   sec  1.04 GBytes  8.92 Gbits/sec
[  5]   8.00-9.00   sec  1.04 GBytes  8.94 Gbits/sec
[  5]   9.00-10.00  sec  1.04 GBytes  8.93 Gbits/sec
[  5]  10.00-10.01  sec  6.23 MBytes  9.08 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.01  sec  10.4 GBytes  8.92 Gbits/sec                  receiver

 

So the performance is very asymmetric.

At the same time, the unraid dashboard shows dropped packets on the receiving side:

[screenshot: unraid dashboard showing dropped RX packets]

I already tried different cables and switch ports. Both interfaces (unraid and PC) use an MTU of 1500.

Unraid is on version 6.9.0-beta30.

I don't know where the drops come from. 😞
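For reference, the drop counters on the dashboard come from the kernel's per-interface statistics, which can also be read directly; a sketch, assuming the 10G NIC is named eth0 (adjust for your system):

```shell
IFACE=eth0  # assumption: replace with your 10G interface name
# rx_dropped counts packets the kernel discarded on receive
cat /sys/class/net/"$IFACE"/statistics/rx_dropped
# driver/NIC-level counters (if supported) give more detail:
ethtool -S "$IFACE" | grep -i drop
```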

 

I also enabled disk shares and tested the cache pool directly:

5 GB file:

[screenshot: transfer speed for the 5 GB file]

 

That seems to be the performance of a single SATA SSD, even though the cache pool is set to raid0:

[screenshot: cache pool settings showing raid0]

 

At least the read speed should be much higher, right?
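One way to check read speed locally, bypassing both the network and the page cache, is to drop caches before re-reading the test file (root required; paths illustrative):

```shell
# Write a test file and flush it to disk first
dd if=/dev/zero of=/mnt/cache/testfile.img bs=1M count=4096 conv=fdatasync
# Drop the page cache so the read comes from the SSDs, not RAM
echo 3 > /proc/sys/vm/drop_caches
# Sequential read; dd reports the MB/s at the end
dd if=/mnt/cache/testfile.img of=/dev/null bs=1M
```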

 

And: Thank you for your support!

 

 

 

 

On 10/12/2020 at 4:08 PM, Toskache said:

During the test with dd:
 


sync; dd if=/dev/zero of=/mnt/cache/testfile.img bs=5G count=1; sync

I realized that the maximum file size for the cache is 2G. Is that correct?

Never seen anyone go for a block size of 5G before, that’s literally orders of magnitude larger than commonly seen...

 

How about a reasonable block size and increased count to match instead?

1 hour ago, gubbgnutten said:

Never seen anyone go for a block size of 5G before, that’s literally orders of magnitude larger than commonly seen...

You are completely right! I tested with various combinations of bs and count. The last one I pasted in the thread was really not reasonable.

With

dd if=/dev/zero of=/mnt/cache/testfile.img bs=32K count=32000; sync

I get 490 MB/s, which is roughly 90% of twice the throughput of a single SATA SSD. So the cache performance seems to be fine. For more performance I'd have to go with M.2/NVMe.
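As a side note, when the command ends with a separate `sync`, dd's reported MB/s excludes the final flush and can overstate sustained speed; GNU dd's `conv=fdatasync` folds the flush into the timing. A sketch (path illustrative):

```shell
# conv=fdatasync makes dd call fdatasync() before reporting,
# so the MB/s figure includes flushing the data to the SSDs
dd if=/dev/zero of=/mnt/cache/testfile.img bs=1M count=1024 conv=fdatasync
```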

 

The only problem now is the dropped RX packets. With 6.9.0-beta25 I did not see any dropped RX packets. I posted about it in the beta thread.

  • Toskache changed the title to (SOLVED) Cache: Performance and max filesize
