Vince

Members
  • Posts: 9
  • Joined
  • Last visited
  • Gender: Undisclosed

Vince's Achievements

Noob (1/14)

Reputation: 0

  1. I'm not sure that's what it is... Reading through comment 10 here: https://bugzilla.redhat.com/show_bug.cgi?id=241314#c10 -- Chris indicates hugemem was a set of Red Hat-proprietary patches that made addressing large amounts of memory "reliable." He also indicates 16GB is the maximum a normal 32-bit kernel can handle, which is not what I'm seeing, but things may be different with the 3.x kernels...

     I might try playing around with the mem= boot parameter to find a happy medium. As large-memory configurations become more common, it might be a good idea to have unRAID detect this scenario and boot with a mem= flag so that, at a minimum, it won't crash. Is anyone else using unRAID with 16GB of RAM?

     Thanks for the insight, guys. -Vince
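
     For anyone who wants to try the mem= route, a rough sketch of capping usable RAM on an unRAID 5 flash drive. The config file location and the label/kernel lines vary by version, and the 8G cap is only an example value:

         # /boot/syslinux.cfg (location may differ by unRAID version) -- add mem= to the append line:
         label unRAID OS
           kernel bzimage
           append initrd=bzroot mem=8G

     Reboot afterwards so the kernel picks up the new limit.
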
  2. Yes, I made a small (1GB) swap file at one point. When the system crashed, the memory stats showed that none of the swap had been used.
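
     A rough sketch of that kind of swap-file setup, for anyone who wants to reproduce it (the /mnt/cache/swapfile path is only an example; use whichever disk makes sense):

         # create, format, and enable a 1GB swap file
         dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=1024
         chmod 600 /mnt/cache/swapfile
         mkswap /mnt/cache/swapfile
         swapon /mnt/cache/swapfile

         # confirm it's active, then watch whether it ever gets used
         swapon -s
         free -m
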
  3. I agree. 16GB was $89. A 64-bit kernel sure seems like it would be a good idea, especially once you start adding apps on top of unRAID. I did try creating a 1GB swap file for fun, but it still crashed, and it hadn't used any of the swap -- at least according to the last couple of lines recorded in the syslog...
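
     A quick way to confirm which kernel you're actually running (unRAID 5 ships a 32-bit build, which is the crux of the problem):

         uname -m    # prints i686 for a 32-bit kernel, x86_64 for a 64-bit one
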
  4. Looks like FTP server functionality is there and working in a default rc5 install. You should be able to FTP to your unRAID server and log in as one of your users and see your disks and user shares. What are you expecting to see from a GUI?
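
     A minimal session from another machine, just to illustrate (the server name and user are placeholders, and exactly where the login lands depends on how the FTP server is rooted):

         ftp tower                # or the server's IP address
         # log in as one of the users defined in the web GUI, then browse:
         ftp> ls
         ftp> cd /mnt/user        # user shares; individual disks are typically /mnt/disk1, /mnt/disk2, ...
         ftp> ls
         ftp> quit
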
  5. Well, this didn't fix my problem after all. It takes longer to crash now, but it still crashes. For kicks, I took 12GB of memory out of the system, reducing it to 4GB, and everything seems to work fine now -- I promise. Does anybody have any ideas? It'd be nice to be able to throw all of this memory in there.
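
     If anyone wants to watch the same thing happen on their own box, a small loop that tracks low memory while a big copy runs (on a 32-bit kernel, free -l splits the totals into Low and High):

         # print low-memory usage every 10 seconds
         while true; do
             date
             free -l | grep -i low
             sleep 10
         done
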
  6. After doing a little more research, it looks like I am running out of LOW memory. I found this post, which discusses the issue: http://www.redhat.com/archives/taroon-list/2007-August/msg00006.html It looks like the problem stems from having a large amount of memory (16GB) and a 32-bit kernel; the kernel uses low memory to track memory allocations. Eric recommends (in order of desirability):

     1) using a 64-bit kernel
     2) using a hugemem kernel
     3) setting vm.lower_zone_protection to at least 250
     4) disabling the oom-killer

     Since options 1 and 2 aren't really doable (easily?), I looked into adjusting the lower_zone_protection tunable. After taking a peek in /proc/sys/vm, I discovered that it doesn't exist in the 3.x kernels that unRAID 5 uses; the parameter has been replaced by lowmem_reserve_ratio, and the syntax is a little different. I found a page (https://bugzilla.redhat.com/show_bug.cgi?id=536734) that recommended setting it to "256 256 250". My default values were "256 32 32". (A sketch of applying this at boot time follows the last post below.)

     After making that change, I tried the same copy operation. The available memory (and the low memory reported by free -l) would decrease as before, but instead of everything getting killed when it got really low, it just seemed to hover there and keep chugging along.

     So... Is this the correct "fix"? Should unRAID detect systems with a large amount of memory and set this for us? Is it possible to get a 64-bit kernel for unRAID? Has anybody else seen this issue? Does anybody have any experience tuning the lowmem_reserve_ratio parameter?

     Thanks! -Vince
  7. Hi guys, I apologize if this has been covered... I'm a new unRAID user and installed 5.0rc5 on a new box with an Intel Core i7-3770 and 16GB of RAM. I'm mounting an NTFS disk and attempting to copy all of its files to the array (cp -R), but after a few minutes the copy runs the system out of memory and the kernel starts killing processes. At that point I'm able to hop on the console, shut everything down cleanly, and reboot the box.

     The syslog shows something like this when memory runs out:

         Jul 8 19:39:22 Tower kernel: emhttp invoked oom-killer: gfp_mask=0x800d0, order=0, oom_adj=0, oom_score_adj=0

     It looks like the filesystem cache is eating up all of the free memory (which should be fine), but when the system gets down to around 4GB of free memory, it crashes. I wrote a little shell snippet to dump the cache every 10 seconds:

         echo -n " "; free | grep total
         while true; do
             echo -n "before: "; free | grep Mem
             sync; echo 3 > /proc/sys/vm/drop_caches
             echo -n " after: "; free | grep Mem
             sleep 10
         done

     If I run this while I'm doing the copy, everything works great. Any ideas what's happening? Any help would be appreciated. I'm looking forward to being able to use what looks like an awesome product!

     Thanks, Vince

     (attached: syslog_201207081957.txt)
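
     Following up on the lowmem_reserve_ratio change described in post 6 above, a hedged sketch of applying it and keeping it across reboots (the /boot/config/go path is the usual unRAID location for boot-time tweaks; adjust if yours differs):

         # apply the value from the bug report immediately
         sysctl -w vm.lowmem_reserve_ratio="256 256 250"

         # make it persist by appending the same command to the go script on the flash
         echo 'sysctl -w vm.lowmem_reserve_ratio="256 256 250"' >> /boot/config/go
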
  7. Hi guys, I apologize if this has been covered... I'm a new unRAID user and installed 5.0rc5 on a new box. It has an Intel Core i7-3770 and 16GB RAM. I'm mounting an NTFS disk and attempting to copy all of the files to the array (cp -R), but it runs the system out of memory after a few minutes and the kernel starts killing all the processes. At that point, I'm able to hop on the console and get everything cleanly shutdown and reboot the box. Syslog shows something like this when memory runs out: Jul 8 19:39:22 Tower kernel: emhttp invoked oom-killer: gfp_mask=0x800d0, order=0, oom_adj=0, oom_score_adj=0 It looks like the filesystem cache is eating up all of the free memory (which should be fine), but when the system gets down to around 4GB free memory, it crashes. I wrote a little shell snippet to continuously dump the cache every 10 seconds: echo -n " ";free|grep total;while true; do echo -n "before: ";free|grep Mem; sync;echo 3 > /proc/sys/vm/drop_caches; echo -n " after: ";free|grep Mem;sleep 10; done; If I run this while I'm doing the copy, everything works great. Any ideas what's happening? Any help would be appreciated. I'm looking forward to being able to use what looks like an awesome product! Thanks, Vince syslog_201207081957.txt