About jens

  1. I also have one to give away (out of 3). London, UK. PM me if you're near.
  2. Hi, yes. Still got 2 of them running. If you need to load them with lots of data, simply start without parity disk and then rebuild parity once you're done with the transfers.
  3. I haven't updated to the RC version yet. Can you define "normal shutdown" and "scripted one", please? I suppose the latter means waiting for the server to become idle and then triggering the script?
  4. Yep, it had to be moved to another server. Should be online again soon. PM me if you need it - just pull out the unRAID stick for a minute.
  5. Great - anything you needed to customize further? I may have forgotten some things, as I did most of the setup a couple of months ago. Which bwm-ng package did you use? I forgot to note a URL for it and would like to update my original post with anything missing.
  6. We simply adopted the policy to not switch it off manually - which makes sense in a multi user environment. To override the power button behavior, have a look into /etc/acpi. From what I understand, LimeTech have just edited the events/default file to redirect processing to acpi_handler.sh. See http://linux.die.net/man/8/acpid for docs.
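For illustration, here's a small sketch of how the two pieces fit together. The event rule and handler below are assumptions about the general acpid mechanism, not a copy of unRAID's actual files - check /etc/acpi on your box for the real contents.

```shell
#!/bin/sh
# Hypothetical ACPI handler sketch (NOT unRAID's actual acpi_handler.sh).
# The rule in /etc/acpi/events/default would forward events to it with
# something like:
#   event=.*
#   action=/etc/acpi/acpi_handler.sh %e
handle_acpi_event() {
  case "$1" in
    # Swallow the power button instead of shutting the box down,
    # matching the "don't switch it off manually" policy:
    button/power*) echo "power button: ignored (multi-user policy)" ;;
    *)             echo "unhandled acpi event: $1" ;;
  esac
}

handle_acpi_event "button/power PBTN"
```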
  7. Now that I think about it, I seem to remember having seen this error message before. I think it's caused by an rc.d script - they are really a mess - and can be ignored. If you want to debug stuff, you can start a script with "bash -xv nameofscript.sh" or edit the first line of the script to read "#!/bin/bash -xv". The real question is: why is your system not powering down? And if it is powering down, why is it powering up again immediately? No idea what Clean_Powerdown is, but it might interfere with the powerdown method I use in the script:

     if [ "$powerDownInsteadOfSleep" = $yes ]
     then
         date >> /boot/logs/auto_s3_sleep.log
         [ -x /sbin/powerdown ] && /sbin/powerdown
         [ -x /usr/local/sbin/powerdown ] && /usr/local/sbin/powerdown
         break
     fi

     Do you have /sbin/powerdown - and does it work as expected? How about /usr/local/sbin/powerdown?
  8. The HP MicroServer BIOS does NOT support sleep (S3). My modification has an option powerDownInsteadOfSleep=$yes to power it down instead. If you enable checkTCP=$yes, you may need to install bwm-ng. See the original forum thread here: http://lime-technology.com/forum/index.php?topic=3657.0
  9. Remotely related bug: some NFS exports are missing after startup. I haven't investigated further, but it seems to be caused by having multiple machine definitions, e.g. "ServerA(rw) ClientB(ro)". I was able to fix this by appending the following to my go script:

     # fix issues with NFS exports
     /usr/sbin/exportfs -v > /tmp/exportfs.before
     sleep 120
     /usr/sbin/exportfs -v > /tmp/exportfs.then
     /usr/sbin/exportfs -ra
     /usr/sbin/exportfs -v > /tmp/exportfs.reloaded
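For context, this is what an export line with multiple machine definitions looks like. The path, host names, and options below are made up for illustration - they are not from my actual /etc/exports:

```
# Hypothetical /etc/exports entry - one export, two machine definitions,
# the pattern that seemed to trigger the missing exports after startup:
/mnt/user/backup  ServerA(rw,sync)  ClientB(ro,sync)
```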
  10. One2Go, the network won't be the bottleneck with unRAID, at least not when using it for write once, read occasionally (WORO/MAID) storage (http://en.wikipedia.org/wiki/Massive_array_of_idle_disks) of media files: 1 GBit is fast enough to stream 20 full BluRay-original-quality HD streams at the same time. SATA III isn't really needed either: hard drives won't get that fast, and for writing to a cache SSD at 500 MB/s, the network would become the bottleneck (plus you would need other systems that can actually deliver data that fast). Cache disks only make sense if you need to "burst" large amounts of data onto the server without waiting for write completion.

      Apart from that, you will need to accept that write speeds to the unRAID array are painfully slow. The reason is that to calculate parity, unRAID will typically read the target disk and the parity disk, do the maths (well, XOR...), and then write to both the target and the parity disk. Writing to multiple disks would NOT speed this up: compared to a RAID5, where a 'stripe' over several disks is often written at the same time, the data on unRAID disks is typically not organized in a way suitable for this. As a result, unRAID would end up interleaving reads and writes at different positions on the parity disk, and things would slow down even further.

      You might see small improvements with a different I/O scheduler, especially when speeds differ between devices (i.e. different hard drives, or drives connected through different controllers). Try this with cfq and the other supported values:

      for i in /sys/block/[hs]d? ; do echo cfq > $i/queue/scheduler ; done

      If you have huge amounts of data to transfer to your unRAID, set up the array without a parity drive, format the disks, and then either write directly to the disk shares over the network or, even better, take the disks out and attach them to an eSATA port on the workstation currently holding the data. When done, add the parity drive and let unRAID calculate parity for the whole array, e.g. overnight.

      If you want faster storage in exchange for ALL drives continuously spinning and losing the ability to recover data from a single drive, set up a RAID5 or RAID10 array. FreeNAS works quite well for this, but there may be slightly faster solutions.
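The parity update described above can be sketched with shell arithmetic. The byte values are made up for illustration, and single XOR parity is assumed:

```shell
#!/bin/sh
# Toy illustration of unRAID's read-modify-write parity update
# (single XOR parity assumed; values are made-up single bytes).
old_data=$(( 0xA5 ))    # current contents of the target sector
new_data=$(( 0x3C ))    # data being written
old_parity=$(( 0xF0 ))  # XOR of this sector across all data disks

# The new parity follows from just these three values - no need to
# read the other data disks, which is why only two disks are touched:
new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'new parity: 0x%02X\n' "$new_parity"   # prints: new parity: 0x69
```

This is also why the write is slow: each logical write costs two reads plus two writes on two spinning disks.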
  11. I have two MicroServers running unRAID. Both have been extended to run 6 drives. A third system is currently in use as an ESXi server and may become another unRAID box next year. Here is some information that may be useful for other MicroServer owners.

      A. BIOS update: This is required for adding a 5th and 6th hard drive. I had been using the 'Russian' BIOS mod for a while, but all my systems now run a version supplied by TheBay. I have documented the process and required settings in a PDF document [1].

      B. Hardware mods: I have been using the Nexus DoubleTwin [2] to mount 2 drives in the Optical Drive Bay (ODB) [3] [4]. Cables required are (i) a power splitter / Y cable from Molex to 2x SATA power; (ii) an internal SATA cable, approx. 50 cm long; and (iii) an external eSATA to internal SATA cable, approx. 50 cm long. The 5th drive is connected using (ii), routing the cable from the motherboard to the ODB - see the silver cable in pictures [4] [5] [6]. For the 6th drive, cable (iii) is routed from the back of the case through an opening above the PCI extension slots [7]. You can easily bend the metal on the clamp that holds down extension cards with a pair of pliers. Temps can go up a bit during parity checks, but are OK otherwise. Replacing the ODB cover with a perforated cover might be a good idea.

      C. Wake-on-LAN: This has to be enabled in the BIOS (see [1]). The current unRAID releases have a bug in their shutdown scripts that leaves the network interface in the "up" state on powerdown. At least on the HP, this prevents WOL from working when the system is powered off (as opposed to WOL from S3/sleep, which the MicroServer BIOS does not support). To fix this, I have added the lines below to my go file:

          # Fix Wake on LAN
          mv /etc/rc.d/rc.inet1 /etc/rc.d/rc.inet1.bak
          sed 's/|| \/sbin\/ifconfig/\&\& \/sbin\/ifconfig/' < /etc/rc.d/rc.inet1.bak > /etc/rc.d/rc.inet1
          chmod 755 /etc/rc.d/rc.inet1

      For reference, a copy of my go file can be found here [8]. There's some additional stuff in there that requires extra packages, so please adapt before use.

      D. Auto poweroff: I have been using a modified version of the auto_s3_sleep.sh script from this forum, with an added powerDownInsteadOfSleep option [9]. Assuming that this script is located in the bin folder of your unRAID flash share, the following lines in the go script [8] will activate it:

          # Wait for disks to spin down and no network activity
          /boot/bin/auto_s3_sleep.sh &

      E. Misc enhancements: I have added very thin patches of felt to the drive holders to reduce vibrations and noise.

      F. Experience: The system is stable with the latest unRAID beta (b14), except that NFS on user shares (NOT disk shares) is totally buggy.

      Hope this helps some of you guys!

      [1] http://www.jens-thiel.de/static/HP/HP%20Proliant%20Microserver%20-%20Flash%20Modified%20BIOS.pdf
      [2] http://www.aquatuning.co.uk/product_info.php/info/p6594_Nexus-Double-Twin-HDD-decoupling.html
      [3] http://www.jens-thiel.de/static/HP/IMAG0127.jpg
      [4] http://www.jens-thiel.de/static/HP/IMAG0126.jpg
      [5] http://www.jens-thiel.de/static/HP/IMAG0133.jpg
      [6] http://www.jens-thiel.de/static/HP/IMAG0134.jpg
      [7] http://www.jens-thiel.de/static/HP/IMAG0135.jpg
      [8] http://www.jens-thiel.de/static/HP/go.txt
      [9] http://www.jens-thiel.de/static/HP/auto_s3_sleep.sh.txt
  12. I recently updated from b9 to b14 and revisited NFS-exported user shares. Using unRAID as a backup target, transfers still abort and the disk hosting the folder "freezes" (any process trying to access it hangs, while the other disks remain usable). There is no directly related error message logged, but the drives always seem to have spun down around that time. The only way to recover seems to be a hard reset (plus the parity check that follows, which is quite annoying). I have read several similar reports throughout the beta announcement threads, so I thought I would open a new topic to collect any information. To Tom/Limetech: Is NFS on user shares considered stable and supported, or is this more of an experimental feature? Is there a recommended build for using NFS+SHFS? Is there anything we can provide for debugging this?
  13. Hi. Are there any (regular) special offers on Plus licenses, apart from the 2-license pack? Considering that WHS2011 sells for less than $60 and FlexRAID is free, I'm a bit hesitant to pay $69/$119 for unRAID (which is mostly leveraging free software).