Fireball3

Everything posted by Fireball3

  1. Also found this s3_sleep script: http://lime-technology.com/forum/index.php?topic=28526.msg266052#msg266052 It seems to be a standalone version cut out of the simple features plugin. There is no difference in how S3 sleep is called, though.
  2. OK, thanks. I know the first link. The second is related to dynamix s3. Does it run standalone, or is dynamix required?
  3. Would you mind sharing that script so I can have a closer look at how it performs S3?
  4. Interesting! How is S3 initiated? Are there any preparations done (by a script or so), or only with this line? echo -n mem > /sys/power/state
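For reference, a minimal sketch of how that one-liner fits into a guarded suspend call. The check against /sys/power/state is my assumption about a standard Linux kernel interface, and the helper names are illustrative; none of this is the thread's actual script.

```shell
#!/bin/bash
# Sketch: suspend to RAM (S3) only if the kernel advertises "mem"
# among its supported states. Hypothetical helpers, names assumed.
s3_supported() {
  # $1: contents of /sys/power/state, e.g. "freeze mem disk"
  case " $1 " in
    *" mem "*) return 0 ;;
    *)         return 1 ;;
  esac
}

do_s3() {
  local states
  states=$(cat /sys/power/state 2>/dev/null)
  if s3_supported "$states"; then
    sync                             # flush filesystem buffers first
    echo -n mem > /sys/power/state   # blocks here until wake-up
  else
    echo "S3 (mem) not supported by this kernel" >&2
    return 1
  fi
}
```

The echo itself is the whole mechanism; the kernel does not return from the write until the machine wakes again.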
  5. @the ones with working S3 sleep: What is the status of unRAID after you wake it from sleep? Any errors in the log? All working fine?
  6. Here is version 1.1 of the auto_s3_sleep script. Thanks to Superorb, who helped get rid of some bugs. In the process I also made the logging more detailed. If the script runs to your satisfaction, you may want to set the debug variable to 0 (no logging) to keep writes to the log (flash drive) as low as possible.

     Some answers to questions that arose: This script only checks certain conditions and then either shuts down the server or sends it to S3 sleep. Powerdown is done by invoking /sbin/powerdown. You should ensure that the clean powerdown script is installed; it is not part of this script! S3 sleep is done by echo -n mem > /sys/power/state. Sending the server to S3 sleep is not trivial, though: unRAID will return many errors when waking up again. The full extent is not clear, since I don't use it myself. There should probably be some kind of routine to, at least, unmount the array before sleeping. More work and testing is needed, so watch out when using S3!

     Possible settings/checks:
     # check for TCP activity
     # check for SSH connections
     # check for TELNET connections
     # check for locally logged in sessions
     # check for /tmp/nos3sleep.lock (if this file exists, no sleep will be performed)
     # do not sleep if dedicated clients are pingable
     # only countdown outside specified hours
     # call smarthistory script to log SMART values before powerdown (not tested & not confirmed working)
     # choose between S3 sleep or powerdown (make sure you have installed the clean powerdown script)

     Rename the file to .sh, edit it with a Unix-compatible editor (e.g. Notepad++), and configure it to your needs.

     Update: Version 1.2 of this script is here. It adds the possibility to exclude drives. Thanks go to maspiter.
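The gist of those checks can be sketched as a small decision function. Only the lock-file path /tmp/nos3sleep.lock comes from the post; the function name, its parameters, and the commented usage are illustrative assumptions, not the actual auto_s3_sleep code.

```shell
#!/bin/bash
# Sketch of the "may we sleep?" decision only.
may_sleep() {
  # $1: lock file path, $2: number of logged-in sessions
  [ -f "$1" ] && return 1        # manual override: lock file present
  [ "$2" -gt 0 ] && return 1     # someone is still logged in
  return 0                       # nothing blocks sleep
}

# Hypothetical use (commented out - this would really suspend the box):
# if may_sleep /tmp/nos3sleep.lock "$(who | wc -l)"; then
#   echo -n mem > /sys/power/state
# fi
```

The real script layers more conditions on top (TCP activity, pingable clients, allowed hours) before the countdown even starts.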
  7. While I agree with the rest, this is not true. The differences between brands (and what you pay more or less for) affect the price rather than the quality.
  8. The PSU thread. There are PSUs out in the field, like some Enermax units, that can be switched from multi-rail to single-rail. There is much confusion about single and multi-rail, though. Some use the term for marketing; others don't, although they have single rails. Some say multi-rail but in fact have a single rail. You're safe if you get a dedicated single-rail unit. Just make sure you have enough current [A] for start-up - it will determine the PSU dimensioning.
  9. OK, with extensive help from WeeboTech I finally managed to get the auto_s3_sleep script working.

     Possible settings/checks:
     # check for TCP activity
     # check for any SSH connections
     # check for any TELNET connections
     # check for any locally logged in sessions (setting this to "no" allows console debugging)
     # check for /tmp/nos3sleep.lock (if this file exists, no sleep will be performed)
     # do not sleep if dedicated clients are pingable
     # only countdown outside specified hours
     # call smarthistory script to log SMART values before powerdown (not tested & not confirmed working)
     # choose between S3 sleep or powerdown (make sure you have installed the clean powerdown script)

     Rename the file to .sh.

     Edit: attachment temporarily removed due to a reported minor bug (the bug has been found and removed, but the fix remains to be confirmed).
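For anyone wondering how the lock-file check is meant to be used in practice: the path comes from the check list above, and the surrounding commands are plain shell, nothing script-specific.

```shell
# Keep the server awake while a long-running job is active:
touch /tmp/nos3sleep.lock

# ... long job runs here; auto_s3_sleep sees the file and skips sleep ...

# Let the countdown resume afterwards:
rm -f /tmp/nos3sleep.lock
```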
  10. Thank you for the link to the wiki. In my log I see the sector as the location of the fault, while the wiki refers to blocks. How can I reconcile the two?
  11. Sorry for being unclear. It is a correcting check every time. I started it for a few seconds just now and the error is there again. Since it occurs at the very beginning, I won't have to wait long for it.

     Jun 3 00:38:36 Tuerke kernel: mdcmd (96): check CORRECT
     Jun 3 00:38:36 Tuerke kernel: md: recovery thread woken up ...
     Jun 3 00:38:36 Tuerke kernel: md: recovery thread checking parity...
     Jun 3 00:38:37 Tuerke kernel: md: using 2560k window, over a total of 3907018532 blocks.
     Jun 3 00:38:38 Tuerke auto_s3_sleep: HDD activity detected. Active HDDs: 12
     Jun 3 00:38:38 Tuerke kernel: md: correcting parity, sector=65680
  12. Here I am again - did the monthly parity check and guess what...

     Jun 1 23:54:49 Tuerke kernel: md: recovery thread woken up ...
     Jun 1 23:54:49 Tuerke kernel: md: recovery thread checking parity...
     Jun 1 23:54:49 Tuerke kernel: md: using 2560k window, over a total of 3907018532 blocks.
     Jun 1 23:54:50 Tuerke kernel: md: correcting parity, sector=65680
     . . .
  13. I'm interested in the solution to this problem. I also have such a drive in my array! Please post your findings, rolly!
  14. OK, coming back with the result. It works.
     1. Set up the new share name in the unRAID GUI.
     2. Then link the folders as described in the prior posts.
     3. Set the permissions for the share created in step 1.
     Cool! Thanks to you all!
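Step 2 refers to earlier posts that aren't quoted in this digest. Purely as an illustration of what "linking folders" into a share directory could look like, here is a hypothetical helper; the function name and all example paths are my assumptions, not the method from those posts.

```shell
#!/bin/bash
# Hypothetical helper: symlink an existing folder into a share directory.
link_into_share() {
  # $1: existing folder, $2: share directory
  mkdir -p "$2"
  ln -sfn "$1" "$2/$(basename "$1")"
}

# Example with made-up unRAID paths, followed by step 3's permission fix
# (nobody:users is the usual unRAID share ownership):
# link_into_share /mnt/disk1/oldfolder /mnt/user/newshare
# chown -R nobody:users /mnt/user/newshare
```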
  15. Which drive? Data or parity? As far as I understand the unRAID philosophy, the data drives must be OK; otherwise there would be a read error. In that case unRAID would write the information to that drive again (calculated from parity). Hopefully that is not also called a "parity error". It seems I have a permanent parity error at one and the same location. Obviously all drives are readable, but either the information in that place keeps changing or it is not written permanently to the parity drive. Should I run an extended SMART test (smartctl -t long) on the array drives?
  16. Probably not, and the error shouldn't reappear at the same place after the second correcting check - but it does!
  17. Here is the syslog. There is no pointer to any kind of error afaik.

     May 1 21:40:14 Tuerke kernel: md: correcting parity, sector=65680
     . . . .
     May 2 12:05:32 Tuerke kernel: md: sync done. time=51919sec
     May 2 12:05:32 Tuerke kernel: md: recovery thread sync completion status: 0

     But this sector 65680 is incorrect at every parity check.

     syslog-20140503-031845_correcting_parity.txt
  18. It is the correcting check. I had no time to collect the log yesterday. I will do so at the next opportunity.
  19. This is the second time that I have seen this (one and only) parity error. It must have been around 2 months ago when it appeared the first time. At the very beginning of the parity check I get one parity error. The most recent (monthly) parity check also threw this error. It seems that the parity error is not getting fixed. How can I find out which drive is causing it and why it isn't corrected permanently? I will look for the syslog and attach it when I'm back home.
  20. It's all good, as long as there is no HPA. Preclear can't wipe that afaik.
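On checking for an HPA before preclearing: `hdparm -N /dev/sdX` reports the visible vs. native sector counts and whether an HPA is active. The small parsing helper below is my own assumption about hdparm's usual output format, not something from this thread.

```shell
#!/bin/bash
# Report whether an `hdparm -N` output line indicates an active HPA.
has_hpa() {
  # $1: line such as " max sectors = 976773168/976773168, HPA is disabled"
  case "$1" in
    *"HPA is enabled"*) return 0 ;;   # host protected area active
    *)                  return 1 ;;
  esac
}

# Usage on a real drive (needs root; device name assumed):
# has_hpa "$(hdparm -N /dev/sdb | grep 'max sectors')" && echo "HPA present"
```

When the two sector counts differ and HPA is enabled, part of the disk is hidden from the OS, which is exactly what preclear can't touch.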
  21. I know; I was logged in, but it failed. The modification done by RobJ fixed it, though.
  22. Could this be a problem with readvz64v3? I haven't checked all results that were posted. Any other results from the v3?
  23. Here are my results with readvz64v3.zip & LC_CTYPE=C /boot/preclear_disk15b.sh -f /dev/sdX. As far as I can tell, there is no improvement compared to the initial script from JoeL. Why are the prereads so slow?

     ========================================================================1.15b
     == invoked as: /boot/preclear_disk15b.sh -A -f /dev/sdb
     == HitachiHDS5C4040ALE630 PL1321LAG325VH
     == Disk /dev/sdb has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time : 16:32:21 (67 MB/s)
     == Last Cycle's Zeroing time : 10:53:55 (101 MB/s)
     == Last Cycle's Post Read Time : 16:51:45 (65 MB/s)
     == Last Cycle's Total Time : 44:19:12
     ==
     == Total Elapsed Time 44:19:12
     ==
     == Disk Start Temperature: 21C
     ==
     == Current Disk Temperature: 28C,
     ============================================================================

     ========================================================================1.15b
     == invoked as: /boot/preclear_disk15b.sh -A -f /dev/sdc
     == HitachiHDS5C4040ALE630 PL1311LAG38ZJH
     == Disk /dev/sdc has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time : 16:24:22 (67 MB/s)
     == Last Cycle's Zeroing time : 10:47:02 (103 MB/s)
     == Last Cycle's Post Read Time : 16:44:01 (66 MB/s)
     == Last Cycle's Total Time : 43:56:37
     ==
     == Total Elapsed Time 43:56:37
     ==
     == Disk Start Temperature: 21C
     ==
     == Current Disk Temperature: 27C,
     ============================================================================

     ========================================================================1.15b
     == invoked as: /boot/preclear_disk15b.sh -A -f /dev/sdd
     == HitachiHDS5C4040ALE630 PL2331LAG8W1YJ
     == Disk /dev/sdd has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time : 16:31:59 (67 MB/s)
     == Last Cycle's Zeroing time : 10:57:37 (101 MB/s)
     == Last Cycle's Post Read Time : 16:51:39 (65 MB/s)
     == Last Cycle's Total Time : 44:22:27
     ==
     == Total Elapsed Time 44:22:27
     ==
     == Disk Start Temperature: 21C
     ==
     == Current Disk Temperature: 27C,
     ============================================================================