
Any benefit to large amounts of RAM?


tucansam

Recommended Posts

My current unRAID server has 8GB; my backup has 2GB.  I've read that Linux does different things with buffering and caching above 8GB.  I'm debating 16GB in my next build.

 

Any benefit to this for a system that will see average use?  I'm wondering if there is a pseudo "ram disk" (like the old DOS days) setup where files written to/from the array are dumped into RAM first for faster access for the clients.

Link to comment

For a vanilla unRAID box the short answer is no.  Even with plugins I don't see over 8GB being beneficial.  With v6 and VMs, yes, it would be beneficial depending on the number and type of VMs you want to run.

 

As for unRAID caching reads to RAM, I don't believe it does.  Considering most modern drives are faster than gigabit Ethernet (making the Ethernet link the bottleneck), I don't see much advantage in it.  It certainly doesn't cache writes to RAM.

Link to comment

Not necessarily, especially on a 32-bit distro.  10% of 4GB is not the same as 10% of 32GB.
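The percentage point is worth making concrete. A minimal sketch, assuming a percentage-based threshold in the style of vm.dirty_ratio (the 10% figure and RAM sizes are illustrative, not a claim about exact kernel behavior):

```shell
# Illustrative only: a percentage threshold scales with total RAM,
# so the same "10%" means very different absolute amounts of dirty data.
ratio=10
for ram_mb in 4096 32768; do
  echo "${ram_mb} MB RAM at ${ratio}%: ~$((ram_mb * ratio / 100)) MB of dirty pages before writeback"
done
```

So on a 32GB box the kernel can let roughly eight times as much un-written data accumulate before it starts flushing.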

 

disktocachesmbwrite.jpg This is an SMB network transfer from a WD Black to an Intel SSD cache disk in unRAID over gigabit.  Captured on unRAID using bwm and plotted in Calc.  Notice the high transfer rate, then a drop to under 10MB/s, then back up: rinse, repeat.

 

pcramtocachessdsmbwrite.jpg This is the same-size transfer of different data, except the source was a RAM drive instead of the WD Black, written over SMB to the unRAID SSD cache disk.

 

ramtoramsmbwrite.jpg This is an SMB transfer reading from a RAM drive, over the gigabit link, writing to a RAM drive in unRAID; again the same size, with different data.

 

I took these plots on the hardware below back on 5.0-rc10; the cache drive was an Intel X25-V 40GB SSD.  The source was Windows 7 64-bit: Phenom 940, 8GB DDR2, RTL8111DL NIC, WD Black 750GB, and a 3GB RAM drive.  Basically, what you see here is the page cache filling up and being written to disk, over and over again.  Disclaimer: I am by no means a Linux guru; I've just been hacking away at it for a few years, learning bits here and there.

Link to comment

Any benefit to this for a system that will see average use?  I'm wondering if there is a pseudo "ram disk" (like the old DOS days) setup where files written to/from the array are dumped into RAM first for faster access for the clients.

 

It's called the buffer cache. It's part of the kernel's performance/memory management. Depending on your usage you could see performance benefits: files and directories that are read very often will be cached in RAM.  There's a double-edged sword here, though.  For high-width arrays (many disks), 32-bit unRAID (4.x/5.x) needs low memory for md driver buffers, and when PAE is used to reach high RAM, page tables are required to access that RAM in a windowed fashion.  Those tables take up low memory too.
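You can watch the buffer cache at work on a running box by reading the kernel's own counters. A minimal sketch, assuming only a Linux /proc filesystem (true of any unRAID install):

```shell
# Read the kernel's page cache and buffer counters from /proc/meminfo.
# Frequently read files show up here as "Cached" memory.
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
buffers_kb=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
echo "Page cache: $((cached_kb / 1024)) MB, buffers: $((buffers_kb / 1024)) MB"
```

On a box with lots of RAM and steady file access you'll typically see the Cached figure grow to consume most of the otherwise-free memory, which is exactly the behavior being described.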

After that you need to adjust some kernel tunables to take advantage of the buffer cache.

 

With the right tunings you can burst writes to the array at high speed. As the buffer cache fills and data is flushed, the transfer slows down to the normal speed of the array, which depends on the drive models chosen.

Another gotcha: data sits in the buffer cache until it is finally flushed, so a UPS is of paramount importance.
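The "data sits in RAM until flushed" risk is directly observable: the kernel reports not-yet-written data as Dirty, and sync forces it out. A sketch, again assuming only a Linux /proc filesystem:

```shell
# Show how much written data is still only in RAM, then force a flush.
dirty_before=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
sync                      # blocks until dirty pages have reached the disks
dirty_after=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
echo "Dirty before sync: ${dirty_before} kB, after: ${dirty_after} kB"
```

Anything still counted as Dirty when the power drops is lost, which is why the UPS matters more as these thresholds are raised.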

 

I've found great write benefits for many small files by running my array with a lot of RAM.

Some motherboards and/or controller card configurations have issues with it. Fortunately I have not had that problem.

 

In my case I do a lot of work with small mp3 music files, so tagging and renaming them runs faster with the extra RAM and burstable kernel tunings.

 

The low-memory window will no longer be a hindrance once we are on 64-bit unRAID.

 

So more RAM is good?  Within reason?  Would a "standard" install (no plugins other than unMenu etc.) benefit from 4GB?  8GB?  16GB?  32GB?

 

When operating on many small files over and over, with the right kernel tuning a benefit can be realized.

It helps with bittorrent too.

 

With the right configuration, editing a large movie file over the network could possibly see a benefit too.

However, for large movie files it may be best to bring them local, operate on them there, then store them back on the array later.

In that case, it may be more beneficial to upgrade the individual workstation instead.

 

It's dependent on usage pattern and count/size of files. Then there is the law of diminishing returns.

Link to comment

Basically what you see here is the page cache filling up and being written to disk, over and over again.  Disclaimer: I am by no means a Linux guru, just been hacking away at it for a few years learning bits here and there.

 

The standard unRAID kernel tunings have Samba and Linux flushing the cache early.

 

These are currently my kernel re-tunings.

sysctl vm.vfs_cache_pressure=10   # favor keeping dentry/inode caches in RAM
sysctl vm.swappiness=100          # swap aggressively, freeing RAM for the caches
sysctl vm.dirty_ratio=20          # let dirty pages grow to 20% of RAM before blocking writers
# (you can set dirty_ratio higher as an experiment)
# sysctl vm.min_free_kbytes=8192
# sysctl vm.min_free_kbytes=65535
sysctl vm.min_free_kbytes=131072  # keep 128MB free for atomic/low-memory allocations
sysctl vm.highmem_is_dirtyable=1  # allow high (PAE) memory to hold dirty pages

 

along with this

# Raise read-ahead to 2048 sectors (1MB) on the array (md) and physical (sd) devices
for disk in /dev/md* ; do blockdev --setra 2048 "$disk"; done
for disk in /dev/sd* ; do blockdev --setra 2048 "$disk"; done
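Since sysctl and blockdev settings don't survive a reboot, they need to be re-applied at startup. On unRAID the conventional place is the /boot/config/go script on the flash drive; a sketch of appending them there (the GO variable is my addition so the example can be run harmlessly outside unRAID; on a real box it would be /boot/config/go):

```shell
# /boot/config/go runs at every boot on unRAID; appending the tunings there
# re-applies them automatically. GO points at a scratch file here so this
# sketch doesn't touch a real flash drive.
GO=${GO:-/tmp/go-example}
cat >> "$GO" <<'EOF'
sysctl vm.vfs_cache_pressure=10
sysctl vm.dirty_ratio=20
for disk in /dev/md* /dev/sd* ; do blockdev --setra 2048 "$disk"; done
EOF
```

Anything placed in that script runs as root at the end of boot, so keep it to known-good commands.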

 

FWIW, for sustained heavy writes I'll end up at the 40MB/s writes also.

Where I benefit is in the shorter 100MB-1GB writes.

Then there is the whole issue of scanning my share's directories over the network.

 

My usage pattern is to download music to the network drive,

fire up mp3tag on the download queue/directory,

re-tag as I see fit, add the artwork, rename, and move.

 

Lots of files scanned, read for tags, re-written, updated, and moved, over and over again in those directories.

Then there is the source code mounts. etc.

Then there is the simultaneous bittorrent access to the server.

Plus the My Documents and /home folders, etc.

 

So moving a single file shows one pattern, but the more spread out your access patterns are, the more benefit is seen in smoothness of operation.

Using md_write_method during certain periods of heavy activity also helps my small-width array.
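For reference, md_write_method is toggled through unRAID's mdcmd interface. If memory serves, the invocation looks roughly like this, but treat both the path and the values as assumptions and check your version's documentation before using it:

```shell
# Assumed unRAID-specific syntax; mdcmd talks to the md driver.
# 1 = reconstruct write ("turbo write"), 0 = normal read-modify-write.
/root/mdcmd set md_write_method 1
# ...run the heavy write workload...
/root/mdcmd set md_write_method 0
```

The trade-off is that reconstruct write spins up every drive in the array, so it is only worth switching on for genuinely heavy bursts.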

 

Once we go 64-bit, I bet there will be improvements in smoothness of operation, e.g. in cache_dirs and memory handling.

I.e. more of our directories can be cached in RAM, allowing the user shares to operate faster by alleviating disk access.

Link to comment

I'll also add, my usage patterns could differ a great deal vs others.

I beat the heck out of my unRAID server, so much that it's unable to spin down the drives during my waking hours.

All of my machines have smaller 256GB SSDs. All of the storage is out on the unRAID server. It's updating all day long.

When I copy directories of mp3s to the server I burst at 80-60MB/s up until about 500MB, then it slowly declines to 40MB/s, depending on how much torrent activity there is.

Link to comment

