ReneV

Everything posted by ReneV

  1. Fair enough. I am, actually. But I can't just buy everything I'm sold on. There has to be a longer-term expectation of use ...
  2. Just double-checked: I remember using the WebGUI password page, and the root password does survive a reboot, so I probably did what dgaschk says.
  3. I believe I had the same issue. If I remember correctly, I added the password with passwd from a shell login; a sketch of that follows below.
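     A minimal sketch, for anyone landing here later (how the password persists across reboots depends on the unRAID version; the WebGUI password page is the supported route):

         # log in at the console or over telnet as root, then:
         passwd
         # type and confirm the new root password at the prompts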
  4. DD-WRT has the ability to send magic packets to machines on the local network through its web interface: Administration->WOL.
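     If you'd rather script this than click through the DD-WRT pages, any Linux box on the LAN can send the same magic packet. A sketch with a made-up MAC address (wakeonlan and etherwake are common stand-alone tools, not part of DD-WRT itself):

         # hypothetical MAC of the sleeping server:
         wakeonlan 00:11:22:33:44:55
         # or, as root (etherwake sends on eth0 by default):
         etherwake 00:11:22:33:44:55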
  5. I'm not convinced. I certainly have computers that wake up from S3 when I touch a USB-connected mouse or keyboard, and even when I switch to or away from them on a USB-connected KVM.
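     On the Linux side you can check which devices are allowed to wake the machine. A sketch, assuming the usual ACPI interface is present (device names such as USB0 vary from board to board):

         # list wake-capable devices and whether each is enabled:
         cat /proc/acpi/wakeup
         # writing a device name toggles its enabled/disabled state:
         echo USB0 > /proc/acpi/wakeup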
  6. I'm running three servers off of one UPS. Two of them are unRAID; they spend most of their time in S3. The last one runs OpenSolaris and stays on 24/7. The UPS is connected to the OpenSolaris box with USB. The unRAID servers run apcupsd as network clients off of the OpenSolaris box. A few minutes into a power loss, the OpenSolaris box sends magic packets to the two unRAID servers. The unRAID servers then wake up, poll the power-off state from the OpenSolaris box over the network, and proceed to shut down properly. This approach may or may not apply to your situation.
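     For reference, the client side of that setup is only a few lines of apcupsd.conf on each unRAID box. A sketch with a made-up address for the OpenSolaris master (my wake-after-delay logic lives in a script on the master, not in apcupsd):

         # /etc/apcupsd/apcupsd.conf on each unRAID server
         # (192.168.50.10 is a hypothetical address for the master box)
         UPSCABLE ether
         UPSTYPE net
         DEVICE 192.168.50.10:3551
         # shut down after roughly 5 minutes on battery:
         TIMEOUT 300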
  7. On the off-chance that this may also slip other people's minds: the first HDDs should obviously go in the bottom of the case, to get airflow over the MB.
  8. More fans do not automatically translate to better cooling, though. Fans can very easily work against each other, and I would imagine that it would be fairly easy to create vortexes, too, or at the very least end up with parts of your case having poor airflow.
  9. At the risk of repeating myself: the top fan is likely to be more than enough for you. 1) Remove all internal obstructions to airflow, such as the fans on the back of the drive cages. 2) Tape off all holes, except in front of HDDs and over the top fan. (I.e., back, side door, unused drive cages.) 3) Consider getting a passive CPU cooler. 4) Set the top fan to low or medium. I bet you that your case will now be cool and quiet.
  10. As JoeL mentioned, your best bet is to think about airflow, rather than just where this or that fan is blowing. I'm using an Antec 900 case that at various times has held 15 drives in (airflow-restricting) backplanes. Using just the top fan with all holes taped up, except those in front of the HDDs, temps stay below 45C, even during 30C+ ambient. Note that I'm not even using a CPU fan, and that the top fan is at the lowest setting until 25C ambient and at the medium setting above that.
  11. This is not a response to your question, but maybe you'll find it relevant anyway. I've found that ordering things into Kids, Drama, ActionThriller, Comedy, Series, etc. makes it easier to choose on any given night.
  12. I would have opted for more than one server earlier than I did.
      * I'm seriously uncomfortable about the failure odds for high numbers of data disks against just one parity disk.
      * I would have saved money on backplanes, money which is at least comparable to the HW price of an el-cheapo server.
      * More than one server means that there are alternatives in case one of them needs attention, which lowers stress levels.
      * I would have felt less pressure to expand with newly-bought and large-capacity HDDs.
      * Multiple servers can use identical configurations, add-ons, etc.
      What I did right was to get an Antec 900, because the huge top fan is enough to cool that particular system.
  13. I looked at this issue a wee while back, and decided my time was better spent just relying on Mac automounting of NFS. See my message here: http://lime-technology.com/forum/index.php?topic=4918.msg62315#msg62315. You may be able to do the soft-link under /Volumes, which *should* make your unRAID box appear as just another drive to Mac OS users ... but I've never actually tried this myself; see the sketch below. (NB! Mac OS caches NFS info, and server changes are typically not picked up automatically. The one or two times I've needed to mess around with this, I've just rebooted the Mac to flush its NFS cache, but I'm sure there's an online way of doing it, too.)
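     If you do want to try it, a sketch from a Mac terminal, assuming a server named tower exporting /mnt/user (both names are placeholders; Mac OS typically needs the resvport option against Linux NFS servers):

         sudo mkdir -p /private/nfs/tower
         sudo mount -t nfs -o resvport tower:/mnt/user /private/nfs/tower
         # make the mount show up alongside the normal volumes:
         sudo ln -s /private/nfs/tower /Volumes/unRAID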
  14. I wanted to look into it, too, but decided to test the transcoding power of an old Mac Mini (1.66GHz Core Duo, 2GB RAM) first. Much to my surprise, the Mac Mini can simultaneously live-transcode 3 different movies off of unRAID servers without discernible problems, and I just left it at that. (Going to 4 movies won't work, though.) The only (relatively minor) advantage to doing it this way is that there's just 1 Air Video server to connect to, serving both my unRAID servers. That said, I'm still interested in running Air Video natively under unRAID.
  15. Yes, it only shows information for specified locations that are either raw devices or mounted file systems.
  16. You can comment out the first of these 4, to make your config file server independent:

          #network_addr 192.168.50.67

      My main edits are as follows (and they, too, are server independent):

          monitor_disk ( /mnt/user /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9 /dev/md10 /dev/md11 /dev/md12 /dev/md13 /dev/md14 /dev/md15 /dev/md16 /dev/md17 /dev/md18 /dev/md19 /dev/md20 )
          disk_rename_label /mnt/user "unRAID"

      Other than these, I've only changed <server_code>.
  17. I'm seeing a new issue after updating from 4.5 to 4.5.3: when booting, all disks on one of my servers show up as unformatted, and the syslog says:

          Mar 5 00:24:01 192 emhttp: disk1 mount error: 32

      Stopping and re-starting once or twice does the trick. syslog-2010-03-05.txt
  18. In case it needs saying, I'm keeping a keen eye on this thread and will revisit the latest version of the sleep script as soon as something looks actionable. Personally, I have no problem with the functionality that's being discussed. NB! If I remember correctly, I changed the script file from Windows to *nix format. (It was probably a mistake not to convert it back to Windows format before uploading it; sorry.) If you have issues, check the line terminations in your local copy of the script; a sketch of that follows below.
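     A quick sketch of checking and fixing the line terminations, assuming the script sits at /boot/s3_sleep.sh (a placeholder path; adjust to wherever your copy lives):

         # CRLF endings show up as \r \n pairs in the dump:
         od -c /boot/s3_sleep.sh | head
         # strip the carriage returns in place (dos2unix may not be installed; sed is):
         sed -i 's/\r$//' /boot/s3_sleep.sh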
  19. In my experience, that's very high. I have a handful or so of computers that are, or are similar to, unRAID servers, and none uses more than 60W at idle. I've only ever seen >100W at idle on computers with dual graphics cards. For example, I have an OpenSolaris workstation that idles around the numbers you quoted first, 120-130W. It has a quad-core CPU, 8GB RAM, 6 (old!) IDE HDDs, 3 SSDs, 1 SATA DVD drive, an 8600GT GPU, and a 9600GT GPU. Are you by any chance running the server with an older, inefficient graphics card and all on-board hardware enabled in the BIOS?
  20. +1 [http://lime-technology.com/forum/index.php?topic=4337.0]