
WeeboTech

Moderators
  • Posts: 9,472
Everything posted by WeeboTech

  1. Speed is right there: 100,970 KB/sec. Good speed.
  2. Heavy Man... whoa! There's a reason there's wheels on that puppy. Nice Setup!
  3. Sweet, Keyway. Let us know what your parity check speeds are. I have a similar build with a Coolermaster Stacker and the AOC-SATA-MV8s (I chose them because the price was right for open box at 70 each). I may upgrade to the SAS controllers, but I'm not sure. I may fill the PCIe slot with an Areca for the parity/cache with SAFE RAID and use a bracket or two for floor mounting. My PSU is on top, so that bottom PSU area is ripe for another couple of drives hahaha! I like the look of the Antec 1200 with Supermicros. I bet with the larger fans exhausting air, you might be able to get away with taking the loud Sanyo Denkis off the CSE-M35T-1B. I went with the GELID from the recommendation in the Newegg reviews. They are very quiet.
  4. My thought is you might lose some of the "outer track" speed benefits when writing to the smaller drives. If I write to the outer track of a 320GB drive, it's going to be using the inner track of the parity drive. I would think a threaded approach to handling parity checks would be better. If drives on different controllers were handled via parallel threads, it might alleviate some of the waits for a drive to finish.
  5. Which 4-in-3 modules are you using? Also, as cool as your sig is (and it is cool), I find it useful when people post their rig specs in the sig. It sorta helps anyone coming along to get an idea of good setups. I'm probably going to change mine too. Heh, but my systems are constantly in flux.
  6. Can you post the SMART logs for this drive? I would be interested in seeing how it calculates 4% health.
  7. There is a way to have incremental rsync backups and multiple days' worth of full backups. See my script here for hints: http://code.google.com/p/unraid-weebotech/downloads/list Using --link-dest and a scheme for pointing to the most recent backup, you can have x days of archived backups and still only rsync over the changes (a rough sketch of the idea follows below). To discuss in further detail, start a new thread on the forum.
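     A minimal sketch of that --link-dest scheme. This is not my posted script; the paths, share name, and 14-day retention are assumptions to illustrate the idea:

     #!/bin/bash
     # Hypothetical incremental backup using rsync --link-dest.
     SRC=/mnt/user/Media/          # assumed source share (trailing slash matters)
     DEST=/mnt/backup              # assumed backup disk mount
     TODAY=$(date +%Y-%m-%d)

     # Find the most recent dated backup directory, if any.
     LATEST=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1)

     if [ -n "$LATEST" ]; then
         # Unchanged files become hard links to the previous run, so each dated
         # directory looks like a full backup but only changed files use new space.
         rsync -a --delete --link-dest="$LATEST" "$SRC" "$DEST/$TODAY"
     else
         rsync -a "$SRC" "$DEST/$TODAY"
     fi

     # Prune archives older than x days; 14 is an arbitrary example.
     find "$DEST" -maxdepth 1 -type d -name '20*' -mtime +14 -exec rm -rf {} +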
  8. It's a conspiracy... it's destined to fail a few days after the warranty expires j/k
  9. Thanks for posting. It's very interesting to see the results. After reviewing it, I wonder how they came up with the 40% health and a target of 221 days. Somewhere there must be a threshold used to calculate this.
  10. What a great tool. We should check into this more and post SMART logs on the lower-level lifetime attributes to gather stats. It could help with automating checks later on. This looks like a great tool to be used in a cron monitor (see the sketch below).
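      A minimal sketch of the cron-monitor idea, assuming smartmontools is installed and that /dev/sda through /dev/sdd are the array drives (both are assumptions; adjust to your setup):

      #!/bin/bash
      # /boot/scripts/smart_snapshot.sh (hypothetical path)
      # Append a dated SMART report per drive so lifetime attributes can be trended.
      for d in /dev/sd[a-d]; do
          log="/var/log/smart_$(basename "$d").log"
          echo "==== $(date) $d ====" >> "$log"
          smartctl -a "$d" >> "$log"      # smartctl comes from smartmontools
      done

      # Example cron entry (crontab -e) to run it nightly at 03:00:
      # 0 3 * * * /boot/scripts/smart_snapshot.sh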
  11. Thanks. Also thanks for pointing out inetd; that's a handy tool I'll take a closer look at. This is a quick-and-dirty way of exposing mdcmd via TCP/IP. Note that it does very little error checking of the string entered and takes no regard of permissions; it's just a way of playing for now. This would really be better done with a C program that inspected the input and wrote directly to the /proc/mdcmd file. In any case it's an interesting experiment.

      #!/bin/bash
      trap "rm -f /tmp/mdcmd.$$" EXIT HUP INT QUIT TERM PIPE
      # Define to 1 if you want to exit after each command sent
      # EXITAFTERREPLY=0
      # EXITAFTERREPLY=1
      while read -t30
      do
          REPLY=`echo $REPLY | tr -d '\000-\010\013-\037\041-\056\072-\100\133-\140\173-\377' | tr A-Z a-z`
          # echo "==>'$REPLY'"
          # echo "$REPLY" | od -x
          [ -z "${REPLY}" ] && exit
          case "${REPLY}" in
              exit|EXIT|quit|QUIT|"" ) exit;;
          esac
          echo "${REPLY}" > /proc/mdcmd
          cat /proc/mdcmd > /tmp/mdcmd.$$
          cat < /tmp/mdcmd.$$
          rm -f /tmp/mdcmd.$$
          [ "${EXITAFTERREPLY:=0}" -gt 0 ] && exit
      done

      inetd segment:

      9090 stream tcp nowait nagios /usr/sbin/tcpd /tmp/mdcmd.sh

      After changing the inetd.conf file, do the following:

      root@Atlas /tmp #ps -ef | grep inetd
      root 1361 1 0 Feb16 ? 00:00:00 /usr/sbin/inetd

      Then do a kill -1 (SIGHUP) on that PID to reload the inetd.conf file:

      root@Atlas /tmp #kill -1 1361
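      A quick way to poke at the listener once inetd has been reloaded, assuming a telnet client is available and that "status" is a valid command for your mdcmd version (hypothetical session):

      telnet localhost 9090
      status
      exit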
  12. (Replying to a quoted post: "Thanks for all the info, a lot to digest for a Windows programmer. I'll try to plow through it this weekend. WeeboTech, if you think you will be updating SPINCONTROL, please let me know and I may just wait for you to do your magic before jumping in myself. Actually, taking a second, closer look at it, it's a lot more straightforward than I thought at first glance. Just need to fashion a TCP/IP interface to those commands.") I'll take a look at my script and update it with the new interface. It might be quicker to expose mdcmd via inetd or an unmenu plugin and let the client program have the intelligence to decide what to do.
  13. Come to think of it I think there is a new interface for spinning the drives down (happened during the spin groups update). I think this script needs to be refined. I know the web page will be out of sync with the actual status because it caches the last known state. I'll have to take a peek at the script and update it.
  14. To write messages to syslog from a script or via a file, use the logger command.

      usage: logger [-is] [-f file] [-p pri] [-t tag] [-u socket] [message ...]

      Add something like the following example to the power change script before it is enabled (I don't have that answer, but the rest should help):

      logger -i -t sleepscript -p local0.info "Switching power state to S3"

      Set -t and -p to whatever fits your installation. -p must be a valid syslog facility.priority pair; -t can be the script or program name.
  15. That's a trip!! I've built machines that were parts from all over the place, but that's the best one I've seen yet. Thanks for sharing.
  16. Start a new thread for this please. I'm very interested. It's one of my pet peeves with unRAID.
  17. Maybe the smbstatus program can be used to reveal open files and/or locks on the SMB share.
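      For example (smbstatus ships with Samba; exact output varies by version, and the flags below just narrow the report):

      smbstatus -L    # list current file locks
      smbstatus -S    # list connected shares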
  18. Didn't work for me either. Thought it was just me, but it seems to affect others too.
  19. That's a good idea. I usually just have a drive sitting on the shelf waiting. After I rebuild my server with 20 drives, then I'll do this. I just got an X7SBE mobo.
  20. I checked it out; it was neat. I think if anything it would be useful for power review. Perhaps a few benchmarks using the most popular large drives. For example, my drives say: WD10EACS, 0.70 A at 12 V and 0.55 A at 5 V; Barracuda 7200.11 1TB, 0.85 A at 12 V and 0.6 A at 5 V. I would pull the numbers off my 1.5 TB drives, but they are busy and I do not want to shut down the torrent client.
  21. How long does it take to bring the machine up to an accessible point (login) from an S3 state?
  22. I use hdparm. I have a script in this thread: http://lime-technology.com/forum/index.php?topic=2555.0 You would need to modify it to point to the hard drives you want to set, via serial number (a rough sketch of the idea is below). You can also set WD drives via a jumper.
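      A minimal sketch of selecting drives by serial number through /dev/disk/by-id and applying an hdparm setting only to those. The serials are made up, and the "-S 242" setting (roughly a one-hour spindown timer, see hdparm(8) for the encoding) is just an example placeholder for whatever option you actually want to set:

      #!/bin/bash
      HDPARM_ARGS="-S 242"    # assumed example setting; change to what you need
      for serial in WD-WCAU40000001 WD-WCAU40000002; do    # made-up serials
          # by-id links end with the drive's serial number, so match on the suffix
          dev=$(ls /dev/disk/by-id/ata-*"$serial" 2>/dev/null | head -n 1)
          [ -n "$dev" ] && hdparm $HDPARM_ARGS "$dev"       # unquoted on purpose to split the args
      done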
  23. I wonder if the drives would still spin up if you set the power-up-in-standby mode? I set this option on my WD Green drives (a hypothetical hdparm example is below). When the system is powered on, the drives are in standby mode and do not spin up until the kernel activates them. Also, how fast does the server return from S3 to a powered-on, usable state?
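      If you want to experiment with that setting, a hypothetical example (use with caution: some BIOS/controller combinations will not detect a PUIS drive at boot, which is why the WD jumper exists as a fallback):

      hdparm -s1 /dev/sdb    # enable power-up-in-standby; -s0 disables it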