
Weird write speeds with SSD cache (Unraid 6.5.3)


Nicapetarky


So I recently added a Samsung 860 Evo as a cache drive. I have a 10Gbit connection between my desktop and my Unraid server. When I try to transfer a large file (in this example 25 GB), it starts off at the speed you would expect from a transfer between two SSDs over a 10Gbit connection (about 450 MB/s). Then, after a few seconds (5-ish), the speed drops to around 120 MB/s.

 

During the transfer I was watching the system stats of the Unraid server, and I noticed that the amount of cached RAM increases during the transfer. When the cached RAM usage reaches about half of my capacity, the transfer speed starts to slow down.
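The same growth should also be visible from the console with something like this (a rough sketch, assuming the standard Linux /proc/meminfo interface; values are in kB):

# Cached/Dirty should climb while the copy runs
watch -n1 "grep -E '^Cached|^Dirty|^Writeback' /proc/meminfo"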

 

System specs, desktop:
CPU: Intel Core i7-6700K
RAM: 16 GB HyperX DDR4
Mobo: ASUS Z170-A
Storage: Samsung 850 Evo 500 GB
NIC: Mellanox ConnectX-2

 

System specs, Unraid server:
CPU: AMD FX-8350 (not optimal, I know)
Mobo: some ASUS ITX board (not sure which model, I got it from a friend)
RAM: 24 GB HyperX DDR3 (2 sticks of 8 GB and 2 sticks of 4 GB)
Storage: Samsung 860 Evo 500 GB
NIC: Mellanox ConnectX-2

 

I have no clue what the issue could be.

The screenshots included are of the transfer speed and the Unraid system monitoring.

 

I'm still in my trial and not that experienced with Unraid, so I hope someone can help me.

ssd writespeed.PNG

systemusage.PNG


The bottleneck appears to be the SSD/controller (assuming it really is writing to the cache and not the array): initially the data is cached in RAM, and once the RAM cache is full the speed is limited by the device. You can confirm this by reducing the RAM cache; on the console, type:

 

sysctl vm.dirty_ratio=1
sysctl vm.dirty_background_ratio=1

 

Run the copy again; it should stay at around 130 MB/s practically from the beginning.
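To see what you're starting from, the current values can be checked first (a sketch; these are standard Linux sysctls, not Unraid-specific):

# print the current write-cache thresholds (percent of RAM)
sysctl vm.dirty_ratio vm.dirty_background_ratio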

 

 

4 minutes ago, trurl said:

Do you have a parity drive? Are you writing to a cached user share?

 

 

Yes, this share is cache-only, and yes, I do have a parity drive.

 

6 minutes ago, johnnie.black said:

The bottleneck appears to be the SSD/controller (assuming it really is writing to the cache and not the array): initially the data is cached in RAM, and once the RAM cache is full the speed is limited by the device. You can confirm this by reducing the RAM cache; on the console, type:

 


sysctl vm.dirty_ratio=1
sysctl vm.dirty_background_ratio=1

 

Run the copy again; it should stay at around 130 MB/s practically from the beginning.

 

 


I tried this, and... the overall speed is lower, but it starts out at around 340 MB/s, then sort of stabilizes at around 250 MB/s, then drops to around 125 MB/s-ish. I also noticed that the transfer is less stable: it now fluctuates between 110 MB/s and 140 MB/s.

 

Also, it somehow is still using memory caching...
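A possible explanation for the leftover caching (assuming vm.dirty_ratio is applied as a percentage of total RAM): even at vm.dirty_ratio=1, roughly 1% of 24 GB ≈ 240 MB of dirty data can still sit in RAM before writes are forced out to the device, so some memory caching will always show up.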

newspeed.PNG

newstats.PNG

2 minutes ago, johnnie.black said:

It will always use a little, but it's clear the device/controller is the limit.

Alright well, thanks for the help. 

To turn the RAM caching back on, I just run the previous commands but with the value 0, right?

 

oh and the device/controller is a hardware limitation?

27 minutes ago, Nicapetarky said:

To turn the RAM caching back on, I just run the previous commands but with the value 0, right?

The defaults are:

sysctl vm.dirty_ratio=20
sysctl vm.dirty_background_ratio=10

Rebooting will also get you back to the defaults.
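If you ever do want a lower setting to survive reboots, one option (assuming the usual Unraid boot flash layout, where /boot/config/go runs at startup) is to append the commands there:

# run at every boot via the Unraid go file
echo 'sysctl vm.dirty_ratio=1' >> /boot/config/go
echo 'sysctl vm.dirty_background_ratio=1' >> /boot/config/go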

 

That SSD should sustain faster writes. Make sure it really is writing to the cache device, that it's linking at SATA 6Gbps, and that it's being trimmed regularly.
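Both of those can be checked from the console with something like this (a sketch; replace /dev/sdX and /mnt/cache with your actual cache device and mount point):

# confirm the drive negotiated a 6.0 Gb/s SATA link
smartctl -i /dev/sdX | grep -i 'sata version'

# trim the cache filesystem now and report how much was discarded
fstrim -v /mnt/cache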


Archived

This topic is now archived and is closed to further replies.
