
Maximum Memory Supported



This is an oversimplification, but work with me:

 

A program running on a 32-bit OS can only address 4 GB. That is going to be 2 GB shared and 2 GB private.

 

All the programs on the server will share the same 2 GB of shared space, and each program can theoretically have its own 2 GB of private space.

 

Stock unRAID has nothing in it that will benefit from more than 4GB of RAM. 

 

If you are brave enough to run VMs on unRAID, each VM could make use of it... but that's about it.


It gets used in the buffer cache.

PAE mode slows the CPU down slightly, but it's hardly noticeable with unRAID.
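If you want to check whether the kernel is actually seeing and using the memory above the low zone, /proc/meminfo gives a quick answer (a sketch; the HighTotal/HighFree lines only appear on 32-bit kernels built with highmem support):

# RAM above the ~896 MB low zone, as seen by a 32-bit highmem/PAE kernel
grep -i '^high' /proc/meminfo
# the same "cached" figure that top reports
grep -i '^cached' /proc/meminfo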

 

All I know is I beat my system up seeding many torrents.

The extra 4 GB of RAM seems to help.

When I run top, it's almost all used.

Notice the cached column.

 

top - 19:32:03 up 148 days, 23:26,  4 users,  load average: 0.45, 0.12, 0.04

Tasks:  93 total,   1 running,  92 sleeping,   0 stopped,   0 zombie

Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:   8308848k total,  8032888k used,   275960k free,    61416k buffers

Swap:        0k total,        0k used,        0k free,  7737784k cached

 

 

For average use, the extra 4 GB probably will not be noticeable.

For a heavily used system, the extra 4 GB in cache may help; it helps my system.
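If you want to watch this on your own box, the same numbers are available outside of top (a sketch; free and /proc/meminfo are standard, and watch is only needed if you want it to refresh):

# the "cached" column is the page cache, shown in megabytes
free -m
# poll the raw counters every few seconds (assumes procps' watch is present)
watch -n 5 'grep -E "^(MemFree|Buffers|Cached):" /proc/meminfo'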


Are all you guys talking about running a system with only 4 GB of RAM? Are there any negative effects from having more than 4 GB when running unRAID without VMware (since it appears it hasn't been officially tested)?

 

The reason I ask is that I'm going to be able to get some extra RAM, and I'm wondering if I should max out my board/unRAID with what's supported, which I believe is either 8 GB or 12 GB.

 

BTW, I do have a pro license =p.


About the only negative effect I can think of is that all the additional memory may not get used by any process or as buffer cache.

 

Now, I'm not running VMware, but I tend to have a bunch of stuff I try... and I only have 512 MB of RAM. My server runs just fine.

 

However... memory is really cheap these days... so put in whatever you want...

 

Joe L.


Now the question is... how do you enable PAE mode on a stock unRAID? Or is the only way to rebuild the kernel?

 

I believe PAE is already enabled in the later versions of unRAID. See announcements.

http://lime-technology.com/forum/index.php?topic=2826.msg23285#msg23285
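You shouldn't have to enable anything by hand if the kernel was built with PAE/HIGHMEM64G. A couple of quick checks (just a sketch, assuming the boot messages are still in the dmesg ring buffer):

# does the CPU advertise the PAE flag at all?
grep -q pae /proc/cpuinfo && echo "CPU supports PAE"
# did the kernel set up high memory at boot?
dmesg | grep -i highmem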

 

As I mention below, there is some benefit to RAM over 4 GB for a heavily used unRAID system.

Average use will not see much benefit from RAM over 4 GB.

I do see a benefit with RAM above 4 GB, but unless your system is very heavily used, the return on investment is small.

 

 

 

 


WeeboTech, any chance you could run unRAID for a time without rTorrent and the like to see if Samba and normal usage use up the 8 GB? If it does, that's enough reason for me, since every time I've given Samba more memory my end-user experience has improved.


About the only negative effect I can think of is that all the additional memory may not get used by any process or as buffer cache.

 

It gets used in the buffer cache.

 

The side effect is that a window of low memory is used to map the RAM above 4 GB.

Also, the more RAM you have, the more the kernel has to search through to determine whether you have a cache hit or a miss.

Sometimes it is faster to access the disk directly.

In my case I have hundreds of files open with random chunks being read all the time, so the extra RAM helps cut down disk I/O a great deal.
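For what it's worth, you can see that effect yourself by timing a read of a large file through the page cache versus bypassing it with O_DIRECT (a sketch; the file path is only an example, and your dd build must support iflag=direct):

# through the page cache -- a second run is mostly cache hits
dd if=/mnt/disk1/somefile of=/dev/null bs=1M
# bypassing the page cache entirely
dd if=/mnt/disk1/somefile of=/dev/null bs=1M iflag=direct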

 

---

More information from Wikipedia.

 

The x86 processor hardware is augmented with additional address lines used to select the additional memory, so physical address size is increased from 32 bits to 36 bits. This increases maximum physical memory size from 4 GB to 64 GB. The 32-bit size of the virtual address is not changed, so regular application software continues to use instructions with 32-bit addresses and (in a flat memory model) is limited to 4 gigabytes of virtual address space. The operating system uses page tables to map this 4 GB address space into the 64 GB of RAM, and the map is usually different for each process. In this way, the extra memory is useful even though no single regular application can access it all simultaneously.

 

For application software which needs access to more than 4 GB of RAM, some special mechanism may be provided by the operating system in addition to the regular PAE support. On Microsoft Windows this mechanism is called Address Windowing Extensions, while on Unix-like systems a variety of techniques are used, such as using mmap() to map regions of a file into and out of the address space as needed.

---

 

It just so happens rtorrent uses mmap(), so I suppose this is part of where the extra RAM gets used.

 

 

Would I tell a user to go out and buy 8 GB of RAM? No. 4 GB should suffice.

 


WeeboTech, any chance you could run unRAID for a time without rTorrent and the like to see if Samba and normal usage use up the 8 GB? If it does, that's enough reason for me, since every time I've given Samba more memory my end-user experience has improved.

 

 

Next time I reboot...

Right now I'm an archive seeder on a private tracker. So I have to keep the environment up.


 

I did an experiment tonight to show that the extra RAM above 4 GB is used by the kernel for buffering.

This is useful if you need to keep a heavily used file on a ramdisk, or if you need a lot of buffer caching for large files.

I'll say again, the average user does not need this. It's still interesting to see the results.

Next test, when I get a chance: drop the caches and do a scan down the whole /mnt tree to see how much RAM it all uses.

 

 

Drop the page, inode and dentry caches.

top - 19:32:03 up 148 days, 23:26,  4 users,  load average: 0.45, 0.12, 0.04

Tasks:  93 total,  1 running,  92 sleeping,  0 stopped,  0 zombie

Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:  8308848k total,  8032888k used,  275960k free,    61416k buffers

Swap:        0k total,        0k used,        0k free,  7737784k cached

 

root@Atlas /tmp #echo 3 > /proc/sys/vm/drop_caches

 

This shows that the cached value went from roughly 7 GB down significantly:

top - 20:11:58 up 150 days, 6 min,  4 users,  load average: 0.16, 0.11, 0.09

Tasks:  93 total,  1 running,  92 sleeping,  0 stopped,  0 zombie

Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.3%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:  8308848k total,  397480k used,  7911368k free,    14984k buffers

Swap:        0k total,        0k used,        0k free,  154136k cached

 

 

Create a 6GB file on the root fs (root ram disk)

root@Atlas /tmp #dd if=/dev/zero of=bigfile bs=1024 count=6000000

6000000+0 records in

6000000+0 records out

6144000000 bytes (6.1 GB) copied, 12.6606 s, 485 MB/s

root@Atlas /tmp #ls -lH bigfile

-rw-r--r-- 1 root root 6144000000 Apr 10 20:13 bigfile

 

This shows the memory is now used by the kernel (notice the cached value now):

top - 20:13:52 up 150 days, 8 min,  4 users,  load average: 0.18, 0.11, 0.09

Tasks:  93 total,  1 running,  92 sleeping,  0 stopped,  0 zombie

Cpu(s):  3.7%us,  0.0%sy,  0.0%ni, 96.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:  8308848k total,  6403468k used,  1905380k free,    15016k buffers

Swap:        0k total,        0k used,        0k free,  6159700k cached

 

Remove it and look at the values now.

root@Atlas /tmp #rm bigfile

 

top - 20:14:14 up 150 days, 8 min,  4 users,  load average: 0.13, 0.11, 0.09

Tasks:  93 total,  2 running,  91 sleeping,  0 stopped,  0 zombie

Cpu(s):  2.8%us, 26.5%sy,  0.0%ni, 69.3%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:  8308848k total,  409772k used,  7899076k free,    15052k buffers

Swap:        0k total,        0k used,        0k free,  165908k cached

 

Notice the drop in the cached value.

OK, now cat a DVD ISO to /dev/null. Notice the size of the cache now:

root@Atlas /mnt/disk5/Videos #ls -lH THE_PHILADELPHIA_EXPERIMENT.ISO

-rw------- 1 root root 4570101760 Jul 17  1970 THE_PHILADELPHIA_EXPERIMENT.ISO

root@Atlas /mnt/disk5/Videos #cat THE_PHILADELPHIA_EXPERIMENT.ISO > /dev/null 

 

top - 20:17:38 up 150 days, 12 min,  4 users,  load average: 0.32, 0.27, 0.16

Tasks:  93 total,  1 running,  92 sleeping,  0 stopped,  0 zombie

Cpu(s):  2.8%us, 26.5%sy,  0.0%ni, 69.3%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:  8308848k total,  4946964k used,  3361884k free,    20480k buffers

Swap:        0k total,        0k used,        0k free,  4698004k cached

 

Now read another ISO:

root@Atlas /mnt/disk5/Videos #cat THE_GOLDEN_COMPASS.ISO > /dev/null

 

Memory status now.

top - 20:19:59 up 150 days, 14 min,  4 users,  load average: 0.81, 0.48, 0.24

Tasks:  93 total,  1 running,  92 sleeping,  0 stopped,  0 zombie

Cpu(s):  2.8%us, 26.5%sy,  0.0%ni, 69.3%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:  8308848k total,  8031380k used,  277468k free,    25036k buffers

Swap:        0k total,        0k used,        0k free,  7775236k cached

 

 

root@Atlas / #echo 3 > /proc/sys/vm/drop_caches

root@Atlas / #top -bn1 | head

top - 20:27:47 up 150 days, 22 min,  4 users,  load average: 0.12, 0.16, 0.16

Tasks:  94 total,  1 running,  93 sleeping,  0 stopped,  0 zombie

Cpu(s):  2.8%us, 26.5%sy,  0.0%ni, 69.3%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:  8308848k total,  373800k used,  7935048k free,    14964k buffers

Swap:        0k total,        0k used,        0k free,  130652k cached

 

 

 


What about having more memory for caching directory and file information (inodes)? Looking at this article:

 

http://blog.fastmail.fm/2007/09/21/reiserfs-bugs-32-bit-vs-64-bit-kernels-cache-vs-inode-memory/

 

It looks as though a 64-bit version would help us in both areas. What are Lime-Tech's thoughts on compiling a 64-bit version for those of us with the CPUs to run it?

 

 

Yes, that's interesting. I do want to build a 64-bit kernel, and we'll try it with a 5.x release.
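As an aside, you can get a rough idea of how much memory the dentry and inode caches are holding right now from the slab allocator stats (a sketch; the exact cache names vary with the kernel and filesystem, e.g. ReiserFS uses reiser_inode_cache):

grep -E 'dentry|inode' /proc/slabinfo
# or, if procps' slabtop is installed, a one-shot summary:
slabtop -o | head -20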


More interesting tests.

Keep in mind I have vm.vfs_cache_pressure=0.
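For anyone following along: that tunable controls how aggressively the kernel reclaims dentry and inode cache, and 0 means never reclaim it. A sketch of setting it, if you want to experiment (be aware that 0 can pin a lot of low memory on a busy 32-bit box):

sysctl -w vm.vfs_cache_pressure=0
# or equivalently
echo 0 > /proc/sys/vm/vfs_cache_pressure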

 

Drop the cache and see how much is cached.

root@Atlas / #echo 3 > /proc/sys/vm/drop_caches

root@Atlas / #top -bn1 | head                 

top - 08:39:57 up 150 days, 12:34,  4 users,  load average: 0.01, 0.05, 0.07

Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie

Cpu(s):  2.8%us, 26.4%sy,  0.0%ni, 69.4%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:   8308848k total,   424008k used,  7884840k free,    14984k buffers

Swap:        0k total,        0k used,        0k free,   130612k cached

 

Clear out the mlocate database, then do a search down the whole system, reading every file's stat info and logging it in the locate database:

root@Atlas / #ls -l /var/lib/mlocate/

total 0

root@Atlas / #/etc/cron.daily/updatedb

 

Sit around whistling Dixie for a long time.

 

root@Atlas / #top -bn1 | head

top - 08:56:41 up 150 days, 12:51,  4 users,  load average: 1.19, 1.03, 0.69

Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie

Cpu(s):  2.8%us, 26.4%sy,  0.0%ni, 69.4%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:   8308848k total,  1633028k used,  6675820k free,   328928k buffers

Swap:        0k total,        0k used,        0k free,   825596k cached

 

So the cache jumps up, but not all that high.

The next test is to use find to see if there are any differences.

 

root@Atlas / #echo 3 > /proc/sys/vm/drop_caches

root@Atlas / #top -bn1 | head

top - 11:27:11 up 150 days, 15:21,  4 users,  load average: 0.05, 0.03, 0.04

Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie

Cpu(s):  2.8%us, 26.4%sy,  0.0%ni, 69.4%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:   8308848k total,   641524k used,  7667324k free,    15012k buffers

Swap:        0k total,        0k used,        0k free,   161236k cached

 

root@Atlas / #find /mnt -ls > /dev/null  2>&1 

root@Atlas / #find /mnt -ls | wc -l

532354

 

root@Atlas / #top -bn1 | head

top - 11:30:44 up 150 days, 15:25,  4 users,  load average: 0.98, 0.52, 0.22

Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie

Cpu(s):  2.8%us, 26.4%sy,  0.0%ni, 69.4%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:   8308848k total,   757616k used,  7551232k free,    74192k buffers

Swap:        0k total,        0k used,        0k free,   218500k cached

 

Slight differences.

Now rerun the mlocate test to show the variance between the find and mlocate scans:

 

 

root@Atlas / #top -bn1 | head

top - 11:32:17 up 150 days, 15:26,  4 users,  load average: 0.34, 0.42, 0.21

Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie

Cpu(s):  2.8%us, 26.4%sy,  0.0%ni, 69.4%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:   8308848k total,   799292k used,  7509556k free,    75080k buffers

Swap:        0k total,        0k used,        0k free,   259632k cached

 

root@Atlas / #rm /var/lib/mlocate/mlocate.db

root@Atlas / #/etc/cron.daily/updatedb

root@Atlas / #top -bn1 | head

top - 11:32:48 up 150 days, 15:27,  4 users,  load average: 0.26, 0.39, 0.21

Tasks:  94 total,   1 running,  93 sleeping,   0 stopped,   0 zombie

Cpu(s):  2.8%us, 26.4%sy,  0.0%ni, 69.4%id,  1.2%wa,  0.0%hi,  0.1%si,  0.0%st

Mem:   8308848k total,   803892k used,  7504956k free,    75332k buffers

Swap:        0k total,        0k used,        0k free,   263728k cached

 

Point of the test: there is very little benefit from large amounts of RAM unless you have heavy usage.

 

 

 

 


That seems to fit with what I have read over the last year. However, with 4 GB I find real-world usage of playing AVIs is better. Here's a scenario I see often...

Stick on a movie... start falling asleep... go back to it the next day and it starts instantly from RAM.

I've absolutely no clue whether 8 GB would be used in this scenario, but 4 GB definitely is, and it's a real boost considering how little it costs in real money to implement.

 

 


That seems to fit with what I have read over the last year. However, with 4 GB I find real-world usage of playing AVIs is better. Here's a scenario I see often...

Stick on a movie... start falling asleep... go back to it the next day and it starts instantly from RAM.

I've absolutely no clue whether 8 GB would be used in this scenario, but 4 GB definitely is, and it's a real boost considering how little it costs in real money to implement.

 

 

The 8 GB would be used; however, if that's your usage pattern, the return on investment is so small it's not worth it.

As I mentioned earlier, if you are seeding a number of torrents, the 8 GB will be used and useful.


The 8 GB would be used; however, if that's your usage pattern, the return on investment is so small it's not worth it.

 

How so? Anything that results in snappier access, especially for files that you have already accessed, is worth the money.

 

 

4GB of RAM costs about 1% of my server costs.


Now the question is... how do you enable PAE mode on a stock unRAID? Or is the only way to rebuild the kernel?

 

I believe PAE is already enabled in the later versions of unRAID. See announcements.

http://lime-technology.com/forum/index.php?topic=2826.msg23285#msg23285

 

 

Yes, but don't you have to enable it with an option? I have 4 GB and only 3.2 GB is showing.

             total       used       free     shared    buffers     cached
Mem:       3365468    3269488      95980          0     120772    2937352
-/+ buffers/cache:     211364    3154104
Swap:            0          0          0


Now the question is... how do you enable PAE mode on a stock unRAID? Or is the only way to rebuild the kernel?

 

I believe PAE is already enabled in the later versions of unRAID. See announcements.

http://lime-technology.com/forum/index.php?topic=2826.msg23285#msg23285

 

 

Yes, but don't you have to enable it with an option? I have 4 GB and only 3.2 GB is showing.

             total       used       free     shared    buffers     cached
Mem:       3365468    3269488      95980          0     120772    2937352
-/+ buffers/cache:     211364    3154104
Swap:            0          0          0

See here... http://support.microsoft.com/kb/929605/en-us

It is a BIOS function on your motherboard reserving memory addresses for its own I/O and for video that apparently must reside in the first 4 GB of address space. You might be able to find an option for remapping memory-mapped I/O to free up some RAM. See here for one such example on one motherboard (nothing to do with unRAID, but it talks about the same memory-remapping feature I'm referring to): http://www.chinhdo.com/20071114/vista-epox-4gb-issue/
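One way to see exactly what the BIOS has carved out below 4 GB is the e820 memory map the kernel logs at boot (a sketch; the exact wording varies by kernel version):

dmesg | grep -i e820
# if the boot messages have already rotated out of the ring buffer, check the syslog
grep -i e820 /var/log/syslog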

 

Joe L.


>> It is a BIOS function on your motherboard reserving memory addresses for its own I/O and for video that apparently must reside in the first 4 GB of address space.

 

I always suggest using a cheap video card with the smallest amount of RAM possible. I've gotten some extra RAM out of a 4 GB machine by doing so.

 

