
Writing to cache (RAM) slower than iperf3 results on 10GbE



Hello all,

I'm still testing the array and system capabilities, and I would like to discuss writes to the RAM cache in Linux/Unraid.

My system has a 10GbE Mellanox NIC connected to a MikroTik switch, a dual-parity array with 11 data disks, and two SATA SanDisk SSDs in RAID 1 (500 MB/s peak performance).

32 GB DDR4 in a single slot, one Xeon 2630 v3.

The RAM-caching dirty ratios are set via the Tips and Tweaks plugin to 20% and 60%.
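(For reference, the plugin just sets the standard Linux VM sysctls; assuming 20% is the background ratio and 60% the blocking ratio, the equivalent by hand would be something like:)

# start background writeback once dirty pages reach 20% of RAM
sysctl -w vm.dirty_background_ratio=20
# force writers to block on writeback at 60% of RAM
sysctl -w vm.dirty_ratio=60
# verify the current values
sysctl vm.dirty_background_ratio vm.dirty_ratio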

If I run iperf3 in both directions from my Mac client over the 10GbE connection, I get 9.5-9.8 Gbit/s on average. The switch interface counters confirm the test while it runs, showing roughly the same speed as iperf3.
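(In case it helps, the two directions can be tested roughly like this; tower.local is a placeholder for the server's address:)

# client -> server
iperf3 -c tower.local
# server -> client (reverse mode)
iperf3 -c tower.local -R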

When I try to transfer a 10 GB MKV file to the cache-enabled share, I get around 450-500 MB/s, which tops out the speed of the SSD. If I understand correctly, though, the transfer should first saturate the 10GbE link while the RAM cache fills, and only slow down to cache-pool speed once the RAM cache is saturated.

I tried playing with the RAM-caching ratios, but I do not get better results.

I ran the test without any Docker containers or VMs running, so the RAM is basically free.

I do not have other 10GbE-enabled systems to test with, so I would like the opinion of those of you with more experience.

Link to comment

Multiple streams are used when you pass the -P flag, e.g. -P 5 will run 5 parallel streams. If it was a single stream then yes, a transfer should start at about the same speed until the RAM cache is filled, though obviously only if the source computer can achieve those speeds, i.e. it's reading from an NVMe or another fast device/RAID.
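For example (server address is a placeholder):

# single TCP stream
iperf3 -c tower.local
# five parallel streams
iperf3 -c tower.local -P 5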

Link to comment

Yes, it was a single stream. The source drive is my Mac's internal NVMe, 1700 MB/s average.

How can I troubleshoot the issue? Running iperf3 seems to rule out the network. Do you have other suggestions to test RAM throughput?

 

Link to comment
1 hour ago, mo679 said:

Hi, thanks for the reply. How do you flush the RAM?

sync; echo 1 > /proc/sys/vm/drop_caches
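(For context: sync flushes dirty pages to disk first, then the echo drops the clean page cache; per the kernel docs, 2 drops dentries and inodes, and 3 drops both.)

sync; echo 2 > /proc/sys/vm/drop_caches   # dentries and inodes only
sync; echo 3 > /proc/sys/vm/drop_caches   # page cache + dentries/inodes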

 

1 hour ago, mo679 said:

It is strange because it looks like it reaches 5 Gbit/s and then slows down to HDD or SSD speed respectively

It looks like there is not enough RAM to cache the whole file. Clear the RAM cache, then transfer the file and check the RAM cache usage. (You set the ratios to only 20% and 60%.)
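One way to watch that while the transfer runs, from the Unraid console:

# dirty data waiting for writeback, plus overall page cache
watch -n1 'grep -E "^(Dirty|Writeback|Cached):" /proc/meminfo'
# or a coarser view
free -h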

Edited by Vr2Io
Link to comment

Thanks Sir for the hint. I flushed the cache, and it indeed shows no RAM usage: 30 GB free, all Docker containers stopped.

iperf3 shows a healthy 9.7 Gbit/s average. Transferring files shows a ramp-up in RAM usage until saturation, but transfer speed to the cache-enabled share tops out at 4.8 Gbit/s, and reading from the cache shows 450 MB/s, which is the normal speed for my SATA SSD.

I changed the dirty-cache parameters, but that only alters when the flush to the SSD happens, not the speed. It actually doubles the total time, because the file is first written to RAM at 4.8 Gbit/s and then to the SSD at roughly the same speed; not writing to RAM at all results in faster total writes.

Might it be the way Unraid handles shares? Is there a way to write to RAM only, bypassing the shares, just as a test? Like a RAM disk / RAM share?
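(What I had in mind was something like a tmpfs-backed share; the path, size, and share name below are made up, just as a sketch:)

# create an 8 GB RAM-backed mount point
mkdir -p /mnt/ramtest
mount -t tmpfs -o size=8G tmpfs /mnt/ramtest
# then export it via SMB Extras, e.g.:
# [ramtest]
#     path = /mnt/ramtest
#     writeable = yes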

 

I also tried the Blackmagic speed test, pointed at the cache-enabled share, and it shows 450 MB/s write and 515 MB/s read, which to my understanding should be the speed of the SSD and not the RAM, right?

Link to comment
34 minutes ago, mo679 said:

but transfer speed to the cache-enabled share tops out at 4.8 Gbit/s

That means there is some issue in the network transfer and it can't utilize the RAM speed. Please check the following (example commands after the list):

- NIC flow control off

- NIC offload on

- CPU governor set to high performance, with turbo enabled
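A rough sketch of checking these from the console, assuming the interface is eth0 and that the cpupower tool is available:

# flow control (pause frames) off
ethtool -A eth0 rx off tx off
# make sure the main offloads are on
ethtool -K eth0 tso on gso on gro on
# CPU governor to performance
cpupower frequency-set -g performance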

 

39 minutes ago, mo679 said:

I also tried the Blackmagic speed test, pointed at the cache-enabled share, and it shows 450 MB/s write and 515 MB/s read, which to my understanding should be the speed of the SSD and not the RAM, right?

Not a RAM issue; it is the issue above.

 

Below is a test FYR (23 GB file with MTU 1500; the file writes to RAM and also slowly flushes to an array disk... once the whole file has been written to RAM, the usage drops slightly).

 

[screenshot: RAM usage and transfer rate during the 23 GB test]

 

 

 

Link to comment

Thanks Sir, I tried what you suggested: no change. I also made a peer-to-peer connection to bypass the switch, connecting the computer and server directly: no change.

 

I'm attaching two screenshots where you can see the transfer rate during the file transfer and during the iperf3 test, which indeed shows 1 Gb/s.

Might my RAM somehow be the limit (32 GB, single DIMM, DDR4-1866)?

[two screenshots attached: transfer rate during the file copy and during the iperf3 test]

Link to comment

Well, then I'm lost. I have a MacBook Pro with an NVMe SSD (1800 MB/s read throughput) on one end and the HP Gen9 Xeon server described above on the other. Bandwidth between the NICs should be OK, as iperf3 shows, so it should be some sort of software issue, or an SMB tweak.

When I set a 9000 MTU on the NICs, iperf3 went from 4.4 Gbit/s to 9.8 Gbit/s. Might I have to configure SMB to take advantage of the 10GbE?
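(For reference, the usual SMB tweak people suggest for 10GbE is multichannel, added via Settings > SMB > SMB Extras; the IP and speed in the interfaces line are placeholders:)

[global]
    server multi channel support = yes
    interfaces = "192.168.1.10;capability=RSS,speed=10000000000"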

Thanks in advance for your time

Link to comment

Hello all. It is indeed strange, and it has to do with the SHFS process, as others have reported: I mounted the cache disk as a share directly and finally got over 1 GB/s writing to RAM. I dug through the forum and it looks like a known issue. I also tried enabling SMB multichannel as suggested, but I get similar transfer speeds with more CPU usage and load.
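(For anyone wanting to reproduce the comparison locally on the server itself, a rough equivalent; the share and file names are made up:)

# through the user share, i.e. the SHFS/FUSE layer
dd if=/dev/zero of=/mnt/user/testshare/test.bin bs=1M count=4096 conv=fdatasync
# directly on the cache pool, bypassing SHFS
dd if=/dev/zero of=/mnt/cache/testshare/test.bin bs=1M count=4096 conv=fdatasync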

So writing to a cache-enabled user share in my case tops out at 450 MB/s, while writing directly to the cache SSD by exposing the disk share saturates 10GbE at more than 1 GB/s. That is a big difference: more than double in my case.
I wish this feature/issue with SHFS had been made clearer; it would have spared me a bit of time.

Do you think this behaviour affects every configuration, or only some configs?

Will it be addressed?

 

Now I have an odd situation...

I ruled out the network: I can get 10GbE speed between my workstation and server, and transferring a file to the cache is fast... but the file-transfer window on my Mac hangs until the whole file has been written back to the SSD, at a slower speed of course, making this whole RAM-acceleration magic pointless.
Is it normal to have to wait for the RAM to be flushed to the SSD before moving another file? It should ingest the file into RAM and then transfer it to the cache in the background, leaving resources free to carry on working.

What I observe is that, for example, it takes 10 seconds to transfer a 10 GB MKV to RAM, then it stays stuck for another 20 seconds while the file is written to the SSD at 500 MB/s, resulting in 30 seconds total. That does not seem normal.

Link to comment
4 hours ago, mo679 said:

it looks like a known issue

Yes, and it happens no matter what the hardware config is, but it is not much related to RAM cache speed. You will have noticed I also write to array disks, as mentioned before; my findings are really different from yours.

 

4 hours ago, mo679 said:

the file-transfer window on my Mac hangs until the whole file has been written back to the SSD

I transfer Windows-to-Unraid and Unraid-to-Unraid; once a file finishes transferring to the RAM cache, the transfer session is never held up, and the file slowly flushes from RAM cache to disk. Also, as mentioned before, you probably don't have enough RAM to cache the whole file.

Edited by Vr2Io
Link to comment
