Toskache Posted October 12, 2020

A few weeks ago I upgraded my PC and my Unraid server with 10G network interfaces. Until then I had only configured a single SSD as a cache disk. That cache disk used the xfs file system and was of course the bottleneck preventing me from saturating the 10G link. So I installed an identical second SSD and configured both as a cache pool (btrfs, raid0). Strangely, this had no effect on write performance over the 10G interface: I get at most ~300 MB/s when transferring a large file to a share (with cache = prefer). Is there anything else I'm missing here? Diagnostics data is attached.

During a test with dd:

sync; dd if=/dev/zero of=/mnt/cache/testfile.img bs=5G count=1; sync

I realized that the maximum file size on the cache seems to be 2G. Is that correct? I used to write larger files to the Unraid NAS, so how will the cache behave?

nas.fritz.box-diagnostics-20201012-1601.zip
JorgeB Posted October 12, 2020

4 minutes ago, Toskache said: I get max ~ 300 MB/s when transferring a large file

Have you done an iperf test? See what you get for a single stream.
Toskache Posted October 12, 2020 (edited)

11 minutes ago, JorgeB said: Have you done an iperf test? See what you get for a single stream.

The iperf performance seems to be OK:

toskache@10GPC ~ % iperf3 -c 192.168.2.4
Connecting to host 192.168.2.4, port 5201
[  5] local 192.168.2.199 port 50204 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 GBytes  9.90 Gbits/sec
[  5]   1.00-2.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   2.00-3.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   3.00-4.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   5.00-6.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   6.00-7.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec
[  5]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec
[  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec  sender
[  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec  receiver

iperf Done.

Edited October 12, 2020 by Toskache
itimpi Posted October 12, 2020

19 minutes ago, Toskache said: I realized, that the max. filesize for the cache is 2G. Is that correct?

No, what made you think that limit applied? The limit is going to be imposed by the available space on the cache.
JorgeB Posted October 12, 2020

Iperf looks good. Enable disk shares and transfer directly to the cache (\\tower\cache), and see if that makes any difference.
Toskache Posted October 12, 2020

21 minutes ago, JorgeB said: Iperf looks good, enable disk shares and transfer directly to cache (\\tower\cache), see if that makes any difference.

OK, I'll test that tonight!
Toskache Posted October 12, 2020

26 minutes ago, itimpi said: No, what made you think that limit applied? The limit is going to be imposed by available space on the cache.

OK, it was probably my mistake. I was just surprised that files > 2G were never generated during the performance tests with dd.
Toskache Posted October 17, 2020

Sorry for the late response. @JorgeB I performed a retest with iperf.

Unraid as iperf3 server:

toskache@Hacky ~ % iperf3 -c 192.168.2.4
Connecting to host 192.168.2.4, port 5201
[  5] local 192.168.2.26 port 53229 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   519 MBytes  4.35 Gbits/sec
[  5]   1.00-2.00   sec   504 MBytes  4.22 Gbits/sec
[  5]   2.00-3.00   sec   491 MBytes  4.12 Gbits/sec
[  5]   3.00-4.00   sec   498 MBytes  4.17 Gbits/sec
[  5]   4.00-5.00   sec   499 MBytes  4.18 Gbits/sec
[  5]   5.00-6.00   sec   437 MBytes  3.66 Gbits/sec
[  5]   6.00-7.00   sec   384 MBytes  3.22 Gbits/sec
[  5]   7.00-8.00   sec   424 MBytes  3.56 Gbits/sec
[  5]   8.00-9.00   sec   472 MBytes  3.96 Gbits/sec
[  5]   9.00-10.00  sec   501 MBytes  4.20 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  4.62 GBytes  3.97 Gbits/sec  sender
[  5]   0.00-10.00  sec  4.62 GBytes  3.97 Gbits/sec  receiver

iperf Done.
Unraid as iperf3 client:

toskache@Hacky ~ % iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.2.4, port 45284
[  5] local 192.168.2.26 port 5201 connected to 192.168.2.4 port 45286
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.02 GBytes  8.73 Gbits/sec
[  5]   1.00-2.00   sec  1.04 GBytes  8.92 Gbits/sec
[  5]   2.00-3.00   sec  1.04 GBytes  8.91 Gbits/sec
[  5]   3.00-4.00   sec  1.04 GBytes  8.95 Gbits/sec
[  5]   4.00-5.00   sec  1.04 GBytes  8.97 Gbits/sec
[  5]   5.00-6.00   sec  1.05 GBytes  8.98 Gbits/sec
[  5]   6.00-7.00   sec  1.04 GBytes  8.94 Gbits/sec
[  5]   7.00-8.00   sec  1.04 GBytes  8.92 Gbits/sec
[  5]   8.00-9.00   sec  1.04 GBytes  8.94 Gbits/sec
[  5]   9.00-10.00  sec  1.04 GBytes  8.93 Gbits/sec
[  5]  10.00-10.01  sec  6.23 MBytes  9.08 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.01  sec  10.4 GBytes  8.92 Gbits/sec  receiver

So the performance is very asymmetric. At the same time, the Unraid dashboard shows dropped packets on the receiving side. I already tried different cables and switch ports. Both interfaces (Unraid and PC) are running with an MTU of 1500. Unraid is on version 6.9.0-beta30. I don't know where the drops come from. 😞

I also enabled disk shares and tested the cache pool directly with a 5 GB file. That seems to be the performance of a single SATA SSD, even though the cache pool is set to raid0. At least the read speed should be much higher, right?

And: thank you for your support!
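To narrow down where the dropped RX packets are being counted, the kernel's per-interface statistics can be read directly. A minimal sketch, assuming a Linux host; `lo` is used as the default interface name only so the snippet runs anywhere, and should be replaced with the actual 10G NIC name from `ip link`:

```shell
#!/bin/sh
# Per-interface counters maintained by the kernel; present for every NIC.
# Replace "lo" with the 10G interface name (see `ip link` for the list).
IFACE="${IFACE:-lo}"
echo "rx_dropped: $(cat /sys/class/net/$IFACE/statistics/rx_dropped)"
echo "rx_errors:  $(cat /sys/class/net/$IFACE/statistics/rx_errors)"
echo "rx_missed:  $(cat /sys/class/net/$IFACE/statistics/rx_missed_errors)"
```

`ethtool -S <iface>` additionally dumps driver-specific counters (ring-buffer overruns and the like), which often show whether drops happen in the NIC hardware or further up the stack.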
JorgeB Posted October 18, 2020

19 hours ago, Toskache said: Both interfaces (unraid and PC) are working with an MTU of 1500.

10GbE is usually faster with MTU=9000.
Toskache Posted October 18, 2020

2 hours ago, JorgeB said: 10GbE usually is faster with MTU=9000

That is crystal clear, but my router (Fritz!Box 6591 Cable) doesn't support jumbo frames; MTU 1518 is the maximum. But the MTU size is not the reason for the drops!?
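Checking what MTU an interface is actually running with is a one-liner. A sketch, using `lo` as a stand-in interface name; the `ip link set` line shows what raising the MTU would look like and assumes an interface called `eth0`:

```shell
#!/bin/sh
# Read the current MTU of an interface ("lo" as a stand-in; use your NIC name).
cat /sys/class/net/lo/mtu
# Raising it to jumbo frames would look like this (root required, and every
# hop in the path -- switch, router, peer -- must also support the larger MTU):
#   ip link set dev eth0 mtu 9000
```

If any device in the path only speaks MTU 1500/1518, a jumbo-frame MTU on the endpoints causes fragmentation or black-holed packets rather than a speedup, so the Fritz!Box limit is a real constraint here.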
JorgeB Posted October 18, 2020

43 minutes ago, Toskache said: But the MTU-Size is not the reason for the drops!?

Shouldn't be.
gubbgnutten Posted October 18, 2020

On 10/12/2020 at 4:08 PM, Toskache said: During the test with dd: sync; dd if=/dev/zero of=/mnt/cache/testfile.img bs=5G count=1; sync I realized, that the max. filesize for the cache is 2G. Is that correct?

Never seen anyone go for a block size of 5G before, that's literally orders of magnitude larger than commonly seen... How about a reasonable block size and an increased count to match instead?
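The ~2G ceiling is consistent with dd behavior rather than a cache limit: on Linux a single read() returns at most 0x7ffff000 bytes (just under 2 GiB), and without `iflag=fullblock` dd accepts that short read, so `bs=5G count=1` writes only about 2 GiB. A sketch of the "reasonable block size, higher count" approach, scaled down here so it runs anywhere (increase `bs`/`count` for a real throughput test, and point `of=` at the cache):

```shell
#!/bin/sh
# A single read() returns at most 0x7ffff000 bytes (~2 GiB) on Linux, and
# without iflag=fullblock dd accepts the short read -- which is why
# bs=5G count=1 produced a ~2 GiB file. Many smaller blocks avoid the cap:
dd if=/dev/zero of=/tmp/dd_demo.img bs=64K count=16 status=none
stat -c%s /tmp/dd_demo.img   # 16 x 64 KiB = 1048576 bytes
rm /tmp/dd_demo.img
```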
Toskache Posted October 18, 2020

1 hour ago, gubbgnutten said: Never seen anyone go for a block size of 5G before, that's literally orders of magnitude larger than commonly seen...

You are completely right! I tested with various combinations of bs and count; the last one I copied into the thread was really not reasonable. With

dd if=/dev/zero of=/mnt/cache/testfile.img bs=32K count=32000; sync

I get 490 MB/s, which is about 90% of twice the throughput of a single SATA SSD. So the cache performance seems to be fine. For more performance I would have to go with M.2/NVMe.

The only problem left is the dropped RX packets. With 6.9.0-beta25 I did not see any dropped RX packets. I posted it in the beta thread.
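Two caveats apply when benchmarking a cache pool with dd from /dev/zero: without a sync option, dd reports its rate before the data has actually reached the disk (the page cache inflates the number), and on a btrfs pool with compression enabled, zeros compress almost to nothing. A sketch of a write test that at least forces the data out before the rate is printed; `/tmp` stands in for `/mnt/cache` here only so the snippet runs anywhere:

```shell
#!/bin/sh
# conv=fdatasync makes dd call fdatasync() on the file before reporting the
# transfer rate, so the page cache does not inflate the result.
# /tmp stands in for /mnt/cache; point of= at the pool for a real measurement.
dd if=/dev/zero of=/tmp/cachebench.img bs=1M count=64 conv=fdatasync
rm /tmp/cachebench.img
```

If compression is enabled on the pool, reading from /dev/urandom (or copying a real, incompressible file) gives a more honest number than /dev/zero.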