RAM and write caching


JorgeB


Looking at this thread I was thinking of adding more RAM to one of my HP N54L servers to help cache writes, but I tried it first on my test server and found that Unraid uses only a very small amount of RAM for write caching. I’d like to fully cache files up to 8GB, but according to my tests that would need almost 64GB of RAM, and the N54L maxes out at 16GB.

 

The server used for testing has nothing installed, so all RAM is free. I tested with 4, 8 and 12GB; with 8GB I also doubled the md_write_limit value (already above the default) from 2048 to 4096, and while that helped, it seems I’d have to go to very high values to get what I’m looking for.

 

These are the approximate values I got on my test server:

Total RAM – md_write_limit – size of cached writes
   4GB    –      2048       –        400MB
   8GB    –      2048       –       1100MB
   8GB    –      4096       –       1500MB
  12GB    –      2048       –       1800MB

 

Any other setting I can change to make Unraid use about half the installed RAM for caching writes?

[Attached screenshots: 4GB, 8GB_2048, 8GB_4096, 12GB]

Link to comment

These are settings that I have used in my go file.

YMMV

 

# Keep dentry/inode caches in RAM longer
sysctl vm.vfs_cache_pressure=10
# Swap idle pages aggressively to free RAM for caching
sysctl vm.swappiness=100
# Percentage of RAM that may hold dirty (unwritten) data before writers block
sysctl vm.dirty_ratio=20
# (you can set it higher as an experiment).
# Minimum free memory the kernel keeps in reserve
# sysctl vm.min_free_kbytes=8192
# sysctl vm.min_free_kbytes=65535
sysctl vm.min_free_kbytes=131072
# Allow high memory to be used for dirty pages (32-bit kernels)
sysctl vm.highmem_is_dirtyable=1

 

In the past, with unRAID 5, vm.highmem_is_dirtyable made a big difference in how much data was cached before being written.

I'm not so sure it matters with the 64-bit kernel anymore, as the key no longer exists.

 

Adjusting vm.dirty_ratio and vm.dirty_background_ratio may provide the improvement you are looking for.

Keep in mind that this makes heavier use of the buffer cache to temporarily hold data before it is written.

If you adjust these to high caching values, make sure the machine is on a UPS.
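A quick way to experiment is to check and change these at runtime before committing anything to the go file; a minimal sketch, where the percentages are only example values to start from:

# Show what is currently in effect
sysctl vm.dirty_ratio vm.dirty_background_ratio
# Try new values at runtime (they reset on reboot, so add the lines you
# settle on to /boot/config/go); 50/5 below are just examples
sysctl vm.dirty_ratio=50
sysctl vm.dirty_background_ratio=5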

 

See also...

https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/

 

Let us know how you make out.

Link to comment

These are settings that I have used in my go file.

 

Thanks, you’re the man!!

 

I’m at work now, but I was able to do a quick test with a trial key on a server with 4GB RAM.

 

Screencap 1 using default values, screencap 2 using:

 

sysctl vm.dirty_background_ratio=80*

sysctl vm.dirty_ratio=90

 

* After some more testing, and for future reference: the important setting is vm.dirty_ratio; set it to the percentage of RAM you want to use for write caching.
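As a rough rule of thumb, the ceiling is about dirty_ratio percent of total RAM (the kernel actually applies the ratio to "dirtyable" memory, which is a bit less than MemTotal); a quick sketch to estimate it on your own box:

# Rough estimate of how much dirty data can accumulate before writers block
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ratio=$(sysctl -n vm.dirty_ratio)
echo "~$(( ram_kb * ratio / 100 / 1024 )) MiB of writes can be buffered in RAM"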

 

And yes, my server is on a UPS; I would not do it otherwise.

 

@strike on V6, md_write_limit can only be changed on the flash drive, in flash/config/disk.cfg

[Attached screenshots: 1 (default values), 2 (adjusted dirty settings)]

Link to comment

FWIW, when doing a high-volume load onto a single disk in a single-threaded manner, like a backup or restore, you can turn on turbo write.

 

This provides a significant boost for single-threaded, high-volume writes.

 

The side effect is that all disks will be spinning for the duration of active writes until turbo write is turned off.

Also, parallel reads/writes to other drives can affect the speed of both activities.

After turbo write is disabled, spin-down timers can take effect on idle drives.

 

It can be enabled/disabled manually in a script or via cron.

I do it via cron during my normal waking/working hours with this file in /etc/cron.d:


root@unRAID:/boot/local/bin# cat /etc/cron.d/md_write_method 
# Turn turbo write on at 08:30 and back off at 23:30
30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
#
# * * * * * <command to be executed>
# | | | | |
# | | | | +---- Day of the Week   (range: 0-6, 0 standing for Sunday)
# | | | +------ Month of the Year (range: 1-12)
# | | +-------- Day of the Month  (range: 1-31)
# | +---------- Hour              (range: 0-23)
# +------------ Minute            (range: 0-59)
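The same commands can also be run by hand when you only need turbo write for a one-off transfer; a minimal sketch using the /proc/mdcmd interface shown above:

# Enable turbo write (reconstruct-write) immediately
echo 'set md_write_method 1' > /proc/mdcmd
# Revert to the default read/modify/write method when done
echo 'set md_write_method 0' > /proc/mdcmd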

 

 

 

Link to comment

I’m using turbo writes for most of my servers, as they are only on a few hours once a week to write new data, so keeping all disks spinning is not an issue. For my main server, which is always on, I would like to avoid that. I periodically write files from 1 to 8GB, separated by a few seconds or minutes, so I’m planning to upgrade to 16GB and dedicate about half of it to caching writes; this way 99% of my write operations will be at gigabit speed with no need to spin up all disks. Thanks again.

Link to comment

Would it be possible to turn on turbo writes for when the mover runs and then turn off again afterwards?

 

It's possible.

The mover script can be modified and installed from the config/go file, or you can set up cron to enter the commands around the time the mover runs.
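As a minimal sketch of that idea, assuming the stock mover script lives at /usr/local/sbin/mover (check the path on your install) and reusing the /proc/mdcmd commands from earlier in the thread:

#!/bin/bash
# Hypothetical wrapper: enable turbo write, run the mover, then restore the default
echo 'set md_write_method 1' > /proc/mdcmd
/usr/local/sbin/mover
echo 'set md_write_method 0' > /proc/mdcmd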

 

I believe limetech was going to make turbo write automatic at some point in the future,

i.e. if all drives are already spinning, use turbo write automatically.

However, that still means something would need to spin up all the drives when the mover runs.

 

This particular topic of enabling turbo write for the mover should probably be a feature request, where ideas on how to implement it can be discussed.

Link to comment

@strike on V6, md_write_limit can only be changed on the flash drive, in flash/config/disk.cfg

Alrighty then, in my case, with default settings: transfers of around 4GiB (with 16GiB RAM) are limited by the gigabit network, and a bit after that the parity-related limit kicks in. That's large enough for most casual transfers, which explains why I didn't really notice it earlier, so thanks for pointing it out :)

 

With turbo writes I get 90-100MB/s throughout the transfer of a 10GB file. With the two dirty settings quoted, I get full network speed throughout the same 10GB file (I should've generated a larger dummy file)...

Link to comment

After a little more testing it appears there's no need to increase vm.dirty_background_ratio; that only makes Unraid start writing to the array later, and increasing it will in fact make you run out of available RAM sooner. The important setting is vm.dirty_ratio; set it to the percentage of RAM you want to use for write caching.
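In go-file terms that conclusion would look something like this (illustrative values, assuming you want roughly half of RAM available for write caching):

# vm.dirty_background_ratio can stay at its default (10 on most kernels);
# only vm.dirty_ratio needs raising to the share of RAM you want to use
sysctl vm.dirty_ratio=50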

Link to comment
  • 11 months later...

Appreciate this is an old thread, but I thought it would be worth replying to, as I found it via a Google search when looking for ways to utilise my RAM more effectively. Does anyone have their unRAID box configured in this manner, with the adjusted parameter vm.dirty_ratio=xx?

 

I have adjusted my go file to add this parameter with a value of 50, so in theory using approximately 4GB of my RAM. At first I thought this was a godsend, because 50% of my data transfers are single files up to approximately 3GB. When testing copying to the array via my VM, it would copy the whole file into the RAM cache first and I thought I'd cracked it! However, I quickly realised something was amiss when my weekly machine backup on my desktop took much longer than before to complete. After testing by making a backup of my virtual disk to the array (30GB), I noticed that when it ran out of cache after about 4GB, instead of seeing my usual copy speed of 40-42MB/s, I was only achieving 24-26MB/s.

 

Can anyone shed any light on what is happening here? Or have I misread this old thread, and is this no longer a wise thing to adjust? Any help appreciated!

Link to comment

I'm using vm.dirty_ratio=99, but only on one of my servers, where 99% of writes fit in RAM. Like you, I noticed that for large transfers this setting makes writes considerably slower once the cache is full; I suspect it's because Linux is trying to flush all the cache at the same time, resulting in simultaneous file writes, which is never good for write speed.
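One way to see this happening is to watch how much dirty data the kernel is holding while a transfer runs; a small sketch:

# Show queued (Dirty) and in-flight (Writeback) data once per second
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'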

Link to comment

Thanks johnnie.black, at least you are seeing the same results as me.

 

I don't think this is suitable for my usage scenario then, unfortunately, which is a shame because it seemed like the answer! I've also read some threads about 'turbo write', which uses all the drives spun up to perform a write operation. Might give that a whirl!

 

Thanks for the swift response.

Link to comment

We were actually just discussing these things recently, see this. From what I've read, it may actually be better to go in the opposite direction: use small vm.dirty numbers instead of very large ones (especially for those with large amounts of RAM) for much smoother transfers and a little less overhead. Read the links there, and use the plugin to play with the numbers. Then please let us know what your testing finds!

Link to comment
